
Essy's Intelligent AI Demo {Pre-Development}

Essy

Towns Guard
So this post won't be so formal, as it's less a product to sell than a concept. For my AI class I had to decide on an independent project based on my relevant domain.

As a game designer I figured I should focus on a game environment. So using RPG Maker and Pearl ABS I will be developing intelligent dueling AIs.

The project will have two options:
1. Play against the AI itself. For simplicity, the AI and you will have the exact same moveset/statistics.
2. Have the AI take over your character to fight itself.

The goal is to create a very strong and robust AI. As a result it may end up being difficult for human players to defeat, but as the project reaches those stages I hope rpgmakermv.co doesn't mind testing it out and providing feedback.

I will probably draw on this experience in my other projects and, in the future, implement such AI for use in MV.

That's all for the moment. This is not an April Fools' thread. : )

EDIT:
https://www.mediafire.com/?lm72yg2jly9y4c4
This is what I'll have done for my presentation at uni. At the very least, this gave me quite an insight into AI design on platforms like RPG Maker, as well as just how serious language performance constraints are.
I picked my favorite configuration to show off, but it's unencrypted, so you can try out the other configurations if you want.
You'll notice that they melee swing a lot but don't invest much time in getting up close.

The reason for this is the search depth: it's unrealistic for me to use a search depth greater than 2-ply in this real-time environment. If this were Python I could probably get to 3-ply, or 6-ply in C++ (possibly 8-ply with optimizations).
Basically, the AI doesn't even consider that it might get close, so it doesn't consider melee a viable option. On the other hand, it'll still use melee if there happens to be a tie between every action.
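
For a sense of scale on that depth limit, here's a rough back-of-envelope sketch. The branching factor of 5 actions per agent is a made-up number purely for illustration; without pruning, a d-ply search touches on the order of b^d leaf states, which is why each extra ply hurts so much.
Ruby:
# Rough leaf-count estimate for a d-ply search.
# The branching factor of 5 here is hypothetical; the real number
# depends on the moveset.
def leaf_estimate(branching, ply)
    branching ** ply
end

[2, 3, 6, 8].each do |ply|
    puts "#{ply}-ply with 5 actions: ~#{leaf_estimate(5, ply)} leaf evaluations"
end
# => 25, 125, 15625, and 390625 evaluations per decision, respectively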

I wish the project had been more involved with the community (like I hoped it would be), but I hope you enjoy the final product. : )

If a mod wants to move this to 'Completed' they can, but as it was never meant to be more than a demo it might be fine to leave it here.
 

Macro

Pantologist
The funny thing about RPGs is that it's not really the AI that's the issue; most RPGs are a simple repetition of spamming your strongest damaging abilities. It's the fact that they usually make the monsters weaker than the heroes.

If you could develop AI that would switch out party members to expose your weaknesses, CC your healer at a pivotal moment, taunt your DPS, etc., that would be pretty amazing (and probably frustrating, haha)!
 

Essy

Towns Guard
Haha, I've been gone a bit but time to answer questions.
@MinisterJay, I've been busy with misc. projects and clearing up my schedule by getting ahead, so development actually started this week.
My current deadline is to complete a unit-tested implementation of Minimax with Alpha-Beta Pruning and Expectimax in RGSS3 by Wednesday.
After that I'll try to finish my abstraction model for the system state by Friday: basically a simulation that runs parallel to the actual game. With any luck I'll have a basic version of it running sometime next week.
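
To give a concrete picture of what that parallel simulation means, here's a minimal sketch of the kind of state snapshot I have in mind. It assumes only stock RGSS3 globals ($game_player, $game_map, $game_actors); the real model will also need Pearl ABS data like cooldowns and projectiles.
Ruby:
# Minimal sketch of a state snapshot (stock RGSS3 only; the Pearl ABS
# specifics will differ). The search mutates this copy, never the live
# game objects.
class BattleSnapshot
    attr_reader :player_pos, :enemy_positions, :player_hp
    def initialize
        @player_pos      = [$game_player.x, $game_player.y]
        @enemy_positions = $game_map.events.values.map { |e| [e.x, e.y] }
        @player_hp       = $game_actors[1].hp
    end
end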

@Macro, that kind of action is actually the point of the Minimax algorithm. Given a group of actions, it chooses the best option among them, and each option is evaluated assuming the opponent screws you over as hard as possible.
When the opponent screws you over, it evaluates its action based on your best move from there.

Think of it like looking several moves ahead in chess.
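
To make that concrete, here's a toy 2-ply example with made-up payoffs: each of our actions leads to a set of opponent replies, the opponent is assumed to pick the reply worst for us, and we pick the action whose worst case is best.
Ruby:
# Toy 2-ply minimax on a hardcoded tree (payoffs are made up).
# Each action maps to the payoffs we end up with after the opponent replies.
payoffs = {
    "attack"  => [3, -5],   # big reward, but the opponent can punish hard
    "retreat" => [1, 0]     # modest reward, no real downside
}
best = payoffs.max_by { |action, replies| replies.min }
puts "minimax picks #{best[0]}"   # => retreat: worst case 0 beats worst case -5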



EDIT: UPDATE
Here's my current progress.
Code:
#=====================================================================================================================================
#=====IMPLEMENTATIONS=================================================================================================================
#=====================================================================================================================================
#=====================================================================================================================================
# Encapsulates the abstract behaviour of an agent.
class Agent
    attr_accessor :index
    attr_accessor :actions
    attr_accessor :searchDepth
    def initialize(actionList=[],index=0,searchDepth=0)
        # An agent has an index. This is its ID.
        self.index = index
        # An agent has a list of action. This is its actuators 'A' in the P.E.A.S. model.
        self.actions = actionList
        # The maximum depth a search is allowed to go by the agent.
        self.searchDepth = searchDepth
    end
end

# Encapsulates an action.
class Action
    attr_accessor :parameters
    attr_accessor :evalFunction
    def initialize(p={},evaluationFunction=lambda{|x,y| 0})
        # An action may have various parameters such as Cooldown, Damage, and Reach
        self.parameters = p
        # An action will have an evaluation function. This may utilize its parameters or the game environment.
        # Note that what is held is a lambda. For more diverse behavior the lambda should misdirect into a function.
        self.evalFunction = evaluationFunction
    end
    def evaluate(gameState)
        # Note that the gameState is usable here. This is the sensors 'S' in the P.E.A.S. model.
        return self.evalFunction.call(gameState,self)
    end
end

# Encapsulates a representation of the game state.
class GameState
    # The agents exist in a world with state. This is the Environment 'E' in the P.E.A.S. model.
    attr_accessor :agents
    attr_accessor :world
    attr_accessor :successorFunction
    def initialize(successor = nil, agents = [], world = nil, successorFunction = lambda{|x,y,z| 0})
        # A game state should be capable of producing a successor state given an action.
        if successor != nil
            self.agents = successor.agents
            self.world = successor.world
            self.successorFunction = successor.successorFunction
        else
            # A game state should have agents capable of action.
            self.agents = agents
            # A game state should have a representation of the world.
            self.world = world
            # A game state should be capable of producing a successor.
            self.successorFunction = successorFunction
        end
    end
    # Returns the successor of the Game State given an agent and its action.
    def getSuccessor(agent,action)
        return self.successorFunction.call(self,agent,action)
    end
end

#=====================================================================================================================================
#==============UNIT TESTS=============================================================================================================
#=====================================================================================================================================
#=====================================================================================================================================

def actionTestUnit()
    return false unless Action.new.evaluate(1) == 0
    return false unless Action.new({},lambda{|x,y| 1}).evaluate(1) == 1
    return false unless Action.new({1=>"hello"},lambda{|x,y| y.parameters[1]}).evaluate(1) == "hello"
    return true
end

def testUnitAgent()
    a = Agent.new
    return false unless a.index == 0 && a.actions == []
    a = Agent.new([])
    return false unless a.index == 0 && a.actions == []
    a = Agent.new([],1)
    return false unless a.index == 1 && a.actions == []
    a = Agent.new([1])
    return false unless a.index == 0 && a.actions == [1]
    a = Agent.new([1,2,3],12382318123912389182312938)
    return false unless a.index == 12382318123912389182312938 && a.actions == [1,2,3]
    return true
end

def gameStateUnitTest()
    return false unless GameState.new.getSuccessor(1,1) == 0
    return false unless GameState.new(GameState.new(nil,[1,2])).agents == [1,2]
    return true
end

def runTests()
    print("GameState Test:",gameStateUnitTest(),"\n")
    print("Agent Test:",testUnitAgent(),"\n")
    print("Action Test:",actionTestUnit(),"\n")
end

runTests()

#=====================================================================================================================================
#=============INCOMPLETE==============================================================================================================
#=====================================================================================================================================

# IN NEED OF RIGOROUS UNIT TESTING DO FIRST
# score gain/loss and some factor based on pellet distance? Actually grabbing the pellet can be prioritized, but not over death.
def unitTestEvaluation(gameState,action)
    # Grab the Player Agent
    player = gameState.agents[0]
    #Find the Player Position
    iterator = 0
    pos = nil
    while pos == nil
        pos = iterator if(gameState.world[iterator][1] == true)
        iterator+=1
    end 
 
    # Look into the future the result of taking this action.
    testState = unitTestSuccessorFunction(gameState,player,action)
    # Grab future score.
    score = testState.score

    # Get the minimum distance to a pellet.
    iterator = 0
    minDistance = 10000
    while iterator != gameState.world.size
        if gameState.world[iterator][0] == true
            minDistance = [(pos-iterator).abs,minDistance].min
        end
        iterator +=1
    end
 
    # Return the score plus the reciprocal of the minimum pellet distance.
    # (1.0 forces float division; 1/minDistance would truncate to zero.)
    return score + 1.0/minDistance
 
end

# IN NEED OF RIGOROUS UNIT TESTING DO SECOND
# Calculates Resulting Points and new positions. Player death is prioritized over victory.
def unitTestSuccessorFunction(gameState,agent,action)
    # determine if friendly or evil
    id = agent.index
    pos = nil
    iterator = 0
    # Get the position of the agent.
    while pos == nil
        pos = iterator if (id == 0 && gameState.world[iterator][1] == true) || (id == 1 && gameState.world[iterator][2] == true)
        iterator+=1
    end
 
    # MOVE THE AGENT (the name lives in action.parameters; guard the world edges).
    if action.parameters["name"] == "right"
        ((gameState.world[pos][1] = false;gameState.world[pos+1][1] = true;pos+=1) unless pos == gameState.world.size-1) if id == 0
        ((gameState.world[pos][2] = false;gameState.world[pos+1][2] = true;pos+=1) unless pos == gameState.world.size-1) if id == 1
    end
    if action.parameters["name"] == "left"
        ((gameState.world[pos][1] = false;gameState.world[pos-1][1] = true;pos-=1) unless pos == 0) if id == 0
        ((gameState.world[pos][2] = false;gameState.world[pos-1][2] = true;pos-=1) unless pos == 0) if id == 1
    end
 
    #determine score stuff!
    gameState.score -=1 # -1 point for taking a step.
    if gameState.world[pos][1] == true && gameState.world[pos][2] == true
        # Player and enemy share a cell: dying is a big loss.
        gameState.score-=10
        return gameState
    end
    # If the player stepped on a pellet, add some points and consume it.
    if gameState.world[pos][0] == true && id == 0
        gameState.score += 5
        gameState.world[pos][0] = false
    end
    # If it's a victory add a bunch of points.
    pellets = 0
    gameState.world.each{|x| pellets +=1 if x[0]==true}
    if pellets == 0
        gameState.score += 100
    end
    return gameState
end

# IN NEED OF RIGOROUS UNIT TESTING DO THIRD
class UnitTestWorld < GameState
    attr_accessor :score
    # Accepts an optional successor so the search can clone this world the
    # same way it clones a plain GameState.
    def initialize(successor = nil)
        if successor != nil
            super(successor)
            self.score = successor.score
            return
        end
        self.score = 0 # Performance measure.
        evaluationFunction = lambda{|x,y| unitTestEvaluation(x,y)} # Evaluation Function
        moveLeft  = Action.new({"name"=>"left" },evaluationFunction) # Action
        moveRight = Action.new({"name"=>"right"},evaluationFunction) # Action
        goodGuy = Agent.new([moveLeft,moveRight],0,5)    # Friendly Agent
        badGuy  = Agent.new([moveLeft,moveRight],1,5)    # Unfriendly Agent
        agents  = [goodGuy,badGuy]
        # [[pellet,player,enemy]..]
        world = [[true,false,false],[true,false,false],[false,true,false],[false,false,true],[false,false,false],[false,false,false]]
        successorFunction = lambda{|x,y,z| unitTestSuccessorFunction(x,y,z)} # Successor Function
        super(nil, agents, world, successorFunction)
    end
end

So yeah, still getting the environment itself together, but the unit testing will pay off: I can dive into the RPG Maker code and hook this in with some sense of security.
Here's how the Unit Test World works:
The Unit Test World is a small world represented by a 6 element array
[pellet,pellet,player,enemy/pellet,empty,empty]

The enemy will wait 7 moves before moving to the right.
The hero's goal is to collect all the pellets.
The hero has 0 points to begin with and loses 1 per move. Winning nets 100 while losing costs 10. Grabbing a pellet is worth 5.

Minimax should make the following moves: Left -> Left -> Right -> Right -> Right
Assuming the enemy is perfect it will kill itself in order to maximize its score.

Expectimax, on the other hand, should do the following: Left -> Left -> Right -> Right -> Left -> Right -> Left -> Right -> Right
Seeing the potential 100 points as worth the risk, it will hold out.
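
The split comes from how each algorithm aggregates the opponent's node. A made-up two-outcome example shows the difference in one line each:
Ruby:
# Made-up payoffs at a risky node: the enemy can blunder (+100) or punish (-10).
# Minimax assumes a perfect opponent; expectimax averages over its replies.
outcomes = [100, -10]
minimax_value    = outcomes.min                                # => -10: avoid the risk
expectimax_value = outcomes.inject(:+) / outcomes.size.to_f    # => 45.0: worth the gamble
puts minimax_value
puts expectimax_value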

UPDATE 2:
I was given a surprise project for another class, so there's been an unexpected delay. I'll still try to finish the tested modules, including expectimax/minimax, by the weekend.
Bwahahaha, double posting to alert the few who are interested.
The algorithms are tested (finally). I was chasing a bug that refused to surface, until I realized I hadn't accounted for the possibility of a tie between actions.
It now works exactly as predicted. : ) So I can move on to hooking it into RPG Maker.
Here's the code for those interested/crazy enough to event it.
Ruby:
#=========================================================================================
# Encapsulates the abstract behaviour of an agent.
#=========================================================================================
class Agent
    attr_accessor :index
    attr_accessor :actions
    attr_accessor :searchDepth
    def initialize(actionList=[],index=0,searchDepth=0)
        #=========================================================================================
        # An agent has an index. This is its ID.
        #=========================================================================================
        self.index = index
        #=========================================================================================
        # An agent has a list of action. This is its actuators 'A' in the P.E.A.S. model.
        #=========================================================================================
        self.actions = actionList
        #=========================================================================================
        # The maximum depth a search is allowed to go by the agent.
        #=========================================================================================
        self.searchDepth = searchDepth
    end
end
#=========================================================================================
# Encapsulates an action.
#=========================================================================================
class Action
    attr_accessor :parameters
    attr_accessor :evalFunction
    def initialize(p={},evaluationFunction=lambda{|x,y| 0})
        #=========================================================================================
        # An action may have various parameters such as Cooldown, Damage, and Reach
        #=========================================================================================
        self.parameters = p
        #=========================================================================================
        # An action will have an evaluation function. This may utilize its parameters or the game environment.
        # Note that what is held is a lambda. For more diverse behavior the lambda should misdirect into a function.
        #=========================================================================================
        self.evalFunction = evaluationFunction
    end
    def evaluate(gameState)
        #=========================================================================================
        # Note that the gameState is usable here. This is the sensors 'S' in the P.E.A.S. model.
        #=========================================================================================
        return self.evalFunction.call(gameState,self)
    end
end
#=========================================================================================
# Encapsulates a representation of the game state.
#=========================================================================================
class GameState
    #=========================================================================================
    # The agents exist in a world with state. This is the Environment 'E' in the P.E.A.S. model.
    #=========================================================================================
    attr_accessor :agents
    attr_accessor :world
    attr_accessor :successorFunction
    def initialize(successor = nil, agents = [], world = nil, successorFunction = lambda{|x,y,z| 0})
        #=========================================================================================
        # A game state should be capable of producing a successor state given an action.
        #=========================================================================================
        if successor != nil
            self.agents = successor.agents.clone unless successor.agents.nil?
            # Clone each cell too; a shallow clone shares the inner arrays and
            # lets lookahead corrupt the state it was cloned from.
            self.world = successor.world.map{|cell| cell.clone} unless successor.world.nil?
            self.successorFunction = successor.successorFunction.clone  unless successor.successorFunction.nil?
        else
            #=========================================================================================
            # A game state should have agents capable of action.
            #=========================================================================================
            self.agents = agents
            #=========================================================================================
            # A game state should have a representation of the world.
            #=========================================================================================
            self.world = world
            #=========================================================================================
            # A game state should be capable of producing a successor.
            #=========================================================================================
            self.successorFunction = successorFunction
        end
    end
    #=========================================================================================
    # Returns the successor of the Game State given an agent and its action.
    #=========================================================================================
    def getSuccessor(agent,action)
        # Run the successor function on a clone so lookahead never mutates
        # the state it was asked about (avoids pass-by-reference corruption).
        copy = self.class.new(self)
        return copy.successorFunction.call(copy,agent,action)
    end
end
#=========================================================================================
# Cry, so much work on this. Pass by reference issues, pass by value issues.
# Getting Ruby to do what I want it to do.
# Applies the minimax algorithm to the agent behaviour.
#=========================================================================================
def minimax(agent,gameState,depth,initialIndex)
    bestScore = -1.0/0.0                               # Default best(worst) result is a catastrophic failure.
    bestAction = Action.new({"name"=>"doNothing"})        # Default best(worst) action is to stand still.
    alpha = -1.0/0.0                                  # Alpha
    beta  = 1.0/0.0                                      # Beta
    #For each action...
    for action in gameState.agents[initialIndex].actions.reverse
        #=========================================================================================
        # ... get the minimax value of each action..
        # Uses some Evil magic to get around ruby specific copying problems.
        #=========================================================================================
        lookAhead = eval(gameState.class.to_s+".new(gameState.getSuccessor(gameState.agents[initialIndex], action))")
        minimax = minValue(lookAhead, depth, alpha, beta, (initialIndex+1)%2, action)
        #=========================================================================================
        # Keep track of every better outcome.
        #=========================================================================================
        if minimax > bestScore
            bestScore = minimax
            bestAction = action
        end
        #=========================================================================================
        # Ignore children when we are bigger than beta.
        #=========================================================================================
        if bestScore > beta
            break
        end
        #=========================================================================================
        # Update alpha
        #=========================================================================================
        alpha = [alpha,bestScore].max
    end
    #=========================================================================================
    # Return the best action.
    #=========================================================================================
    return bestAction
end


def minValue(gameState, depth, alpha, beta ,index, curraction)
    #=========================================================================================
    #If we're as deep as can go then return our result.
    #=========================================================================================
    if terminalTest(gameState,depth)
        val = curraction.evaluate(gameState)
        return val
    end
    #=========================================================================================
    # Our default worst result is an infinitely good success.
    #=========================================================================================
    worstResult = 1.0/0.0
    #=========================================================================================
    # For each action the agent can make...
    #=========================================================================================
    for action in gameState.agents[index].actions.reverse
        #=========================================================================================
        # Look ahead
        #=========================================================================================
        lookAhead = eval(gameState.class.to_s+".new(gameState.getSuccessor(gameState.agents[index], action))")
        worstResult = [worstResult, maxValue(lookAhead, depth-1, alpha, beta,(index+1)%2,action)].min
        #=========================================================================================
        # Ignore children who are smaller than alpha.
        #=========================================================================================
        if worstResult < alpha
            break
        end
        #=========================================================================================
        # Update beta
        #=========================================================================================
        beta = [beta,worstResult].min
    end
    return worstResult
end

def maxValue(gameState, depth, alpha, beta ,index, curraction)
        #=========================================================================================
        #If we're as deep as can go then return our result.
        #=========================================================================================
        if terminalTest(gameState,depth)
            val = curraction.evaluate(gameState)
            return val
        end
        #=========================================================================================
        # Our default best result is an infinitely bad failure.
        #=========================================================================================
        bestResult = -1.0/0.0
        #=========================================================================================
        # For each action the agent can make...
        #=========================================================================================
        for action in gameState.agents[index].actions.reverse
            #=========================================================================================
            # Take the best result from our branches.
            #=========================================================================================
            lookAhead = eval(gameState.class.to_s+".new(gameState.getSuccessor(gameState.agents[index], action))")
            bestResult = [bestResult, minValue(lookAhead, depth, alpha, beta, (index+1) %2, action)].max
            #=========================================================================================
            # Ignore children when we are bigger than beta.
            #=========================================================================================
            if bestResult > beta
                break
            end
            #=========================================================================================
            # Update alpha
            #=========================================================================================
            alpha = [alpha,bestResult].max
        end
        #=========================================================================================
        # Return the best result.
        #=========================================================================================
        return bestResult
end

#=========================================================================================
# Applies the expectimax algorithm to the agent behaviour.
#=========================================================================================
def expectimax(agent,gameState,depth,initialIndex)
    bestScore = -1.0/0.0                                # Default best (worst) score is negative infinity.
    bestAction = Action.new({"name"=>"doNothing"})        # Default best(worst) action is to stand still.
    for action in gameState.agents[initialIndex].actions # For Each action...
        #=========================================================================================
        # Look ahead and grab our expectimin value from the other agent.
        #=========================================================================================
        lookAhead = eval(gameState.class.to_s+".new(gameState.getSuccessor(gameState.agents[initialIndex], action))")
        expmaxVal = expectiminValue(lookAhead,depth,(initialIndex+1)%2,action)
        #=========================================================================================
        # If this is a new best probabilistic value then take this action instead.
        #=========================================================================================
        if expmaxVal > bestScore
            bestScore = expmaxVal
            bestAction = action
        end
    end
    #=========================================================================================
    # Return the best action we can.
    #=========================================================================================
    return bestAction
end

def expectiminValue(gameState, depth ,index, curraction)
    #=========================================================================================
    # Check if we're done.
    #=========================================================================================
    if terminalTest(gameState,depth)
        #=========================================================================================
        # If so evaluate the worth of the current action.
        #=========================================================================================
        return curraction.evaluate(gameState)
    end
    #=========================================================================================
    # Initialize average value to be 0.
    #=========================================================================================
    expVal = 0
    for action in gameState.agents[index].actions            # For Each action...
        #=========================================================================================
        # Look ahead and sum up the values for the actions we can take.
        #=========================================================================================
        lookAhead = eval(gameState.class.to_s+".new(gameState.getSuccessor(gameState.agents[index], action))")
        expVal += expectimaxValue(lookAhead,depth-1,(index+1)%2,action)
    end
    #=========================================================================================
    # Average our resulting sum across our actions
    #=========================================================================================
    return expVal/gameState.agents[index].actions.size.to_f
end

def expectimaxValue(gameState, depth ,index, curraction)
    #=========================================================================================
    # If we're done, that's that.
    #=========================================================================================
    if terminalTest(gameState,depth)
        return curraction.evaluate(gameState)
    end
    #=========================================================================================
    # Initialize our best value to be negative infinity.
    #=========================================================================================
    bestResult = -1.0/0.0
    for action in gameState.agents[index].actions        # For each action...
        #=========================================================================================
        # Look ahead and take the best result out of possible actions.
        #=========================================================================================
        lookAhead = eval(gameState.class.to_s+".new(gameState.getSuccessor(gameState.agents[index], action))")
        bestResult = [bestResult,expectiminValue(lookAhead,depth-1,(index+1)%2,action)].max
    end
    #=========================================================================================
    # Return the best result.
    #=========================================================================================
    return bestResult
end
#=========================================================================================
# Misdirection into minimax
#=========================================================================================
$miniwrapper = lambda{|v,x,y,z| minimax(v,x,y,z) }
#=========================================================================================
# Misdirection into expectimax
#=========================================================================================
$expectiwrapper = lambda{|v,x,y,z| expectimax(v,x,y,z) }

#=========================================================================================
# Needs to be implemented in some way, checks if the simulation is over or max depth reached.
#=========================================================================================
def terminalTest(gameState,depth)
    #=========================================================================================
    # Check if player is dead.
    #=========================================================================================
    pos = nil
    iterator = 0
    while pos == nil
        pos = iterator if(gameState.world[iterator][1] == true)
        iterator+=1
    end  
    if gameState.world[pos][1] && gameState.world[pos][2]
        return true
    end
    #=========================================================================================
    # Check if the player has won.
    #=========================================================================================
    pellets = 0
    gameState.world.each{|x| pellets +=1 if x[0]==true}
    if pellets == 0
        return true
    end
    #=========================================================================================
    # Check if we're too stupid to look further.
    #=========================================================================================
    if depth <= 0
        return true
    else
        return false
    end
end
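
For anyone who wants to poke at it without wiring up RPG Maker, here's a quick driver. It assumes this snippet is loaded together with the Agent/Action/UnitTestWorld code from the earlier post; per the predicted sequence above, the first move should come out as "left".
Ruby:
# Quick driver for the test world: ask minimax for the player's best first move.
# Assumes the classes from the first code block are loaded before this one.
state  = UnitTestWorld.new
player = state.agents[0]
best   = minimax(player, state, player.searchDepth, 0)
puts best.parameters["name"]   # expected: "left"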
Whahaha, triple post! Merging into the last one!
Just sending the alert that I placed the demo in the first post.
Hope you all enjoy it, as simple as it is.
 