Artificial intelligence for games
The literature review aims to provide additional information on various aspects of game AI, including system overview, design, game theory, game genres, trends, current issues in the gaming industry, and the implementation of AI in games. Game AI has been present since the earliest video games were created in the 1970s. It is constantly evolving, setting new standards for computer hardware engineering and establishing benchmarks for future game industry production. In recent years, advanced game AI titles have emerged and dominated the market, offering more entertainment value than earlier video games. With improvements in 3D rendering hardware and the adoption of high-resolution graphics as an industry standard, game AI has become a crucial factor in determining a game's success. Dedicated programmers specializing in game AI are now essential members of core design teams. Programming game AI poses significant challenges for developers, and an increasing number of presentations at the annual Game Developers Conference focus on AI techniques.

The potential of academic AI research and game AI technologies has been demonstrated by the real-time performance requirements of computer game AI and by the demand for humanlike interactions, appropriate animation sequences, and internal state simulations for populations of scripted agents. The commercial success of licensed game engines, such as the one behind Unreal Tournament, has inspired a genre of first-person shooter designs that incorporate increasingly sophisticated and expert agent behaviors. The bots of Epic Games' Unreal Tournament are well known for their scalability and tactical excellence. Hardware performance capabilities and the constraints they place on bot support have been a persistent bottleneck for advanced game AI development. Real-time graphics rendering has traditionally consumed a significant share of CPU resources, leaving insufficient time and memory for game AI and collision detection. Some essential AI problems, such as pathfinding, require adequate processor resources.

The potential of computer games as a tool for AI research and education continues to grow. Well-designed games require great intelligence and good strategy, and designing an agent that plays such a game is a challenging task, making it an ideal context for practicing AI algorithms. Many game-based tools for CS and AI educators have already been presented, for example by McGovern et al. and Dinner et al.

Various Java-based games, including Intelligence, have been proposed for use in CS education. However, these games do not offer a centralized platform for managing multiple games: users can only develop and test their game agents offline, making it difficult to compare their agents with others'. In addition, these tools do not provide a uniform set of interfaces that educators can use to design different games for teaching various algorithms.

To address these limitations, the Betony Intelligent Agent Platform has been created. It serves as an online turn-based strategy game playing system. Computer Science students can create game agents and participate in contests with others on this platform. Through this process, they can acquire basic programming skills and gain insights into various Artificial Intelligence algorithms. Currently, the availability of such platforms for educators and researchers interested in turn-based strategy game development is limited.

Another similar platform, MUMPS, exists, but it only supports normal-form games, which represent a small subset of turn-based strategy games. In turn-based strategy games, players move in turns, unlike real-time strategy games, where all players act simultaneously. Games in this genre, such as chess and bridge, are easy for students to comprehend.
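To make the distinction concrete, the sketch below represents a two-player normal-form game as a simple payoff matrix; both players act once and simultaneously, with no turns or intermediate states. The action names and payoff values are invented for illustration and are not taken from MUMPS or Betony.

    # A minimal normal-form game: one simultaneous move per player, no turns.
    # Payoffs are (row player, column player); the values are made up for illustration.
    payoffs = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    def play_normal_form(row_action, col_action):
        """One-shot simultaneous play: no turn order and no game state to track."""
        return payoffs[(row_action, col_action)]

    # A turn-based strategy game, by contrast, needs a state and alternating moves,
    # e.g. legal_moves(state) and apply(state, move) called turn by turn.
    print(play_normal_form("defect", "cooperate"))  # -> (5, 0)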

Furthermore, depending on their complexity, such games can be used in a variety of courses. The Betony platform is versatile for educational purposes: it is suitable for students at different levels, from undergraduates to graduates, and it provides facilities and interfaces that let users design various competitive situations and evaluate different kinds of intelligent agents. Betony has been applied successfully in introductory programming and advanced algorithm courses, where turn-based strategy games were developed to engage students in programming; this approach increased student interest compared to traditional methods. Betony has also been used in a large-scale live competition in which contestants developed agents for a custom game, helping to identify competitors with a deep understanding of AI algorithms. The remainder of this review explores the structure, features, main games, and applications of Betony, and concludes by inviting users to develop new games and to propose educational cooperation.

Game AI applies artificial intelligence techniques in video games to create the illusion of intelligence in non-player characters (NPCs). These techniques are derived from existing methods in AI, and the term covers a wide range of algorithms that draw on control theory, robotics, computer graphics, and computer science. Unlike traditional AI, game AI prioritizes the appearance of intelligence and good gameplay, allowing for workarounds and cheats. The abilities of computer characters are often tuned to keep the game fair for human players; in first-person shooter games, for instance, NPCs' aiming skill is toned down to match human capabilities. Game playing has been a focus of AI research since its inception.

is the game "1942," created in 1942, which showcased advanced technology for its time and outperformed skilled human players. Additionally, in 1951, Christopher Strachey developed a checkers program using the Frantic Mark 1 machine at the University of Manchester, while Dietrich Print created a chess program. These programs were among the first computer programs ever written.Arthur Samuels developed a checkers program in the middle and early ass. Eventually, this program became skilled enough to challenge a respectable amateur player. In 1997, Garry Sparrow was defeated by Vim's Deep Blue computer, marking a significant moment in the development of checkers and chess games. During the sass and early sass, the first video games like Spaceward!, Pong, and Gotcha were created. These early games were implemented on discrete logic and focused on competition between two players, without AAA (artificially intelligent opponents). However, in the sass, games featuring a single player mode with enemies started to emerge. The arcade saw the introduction of notable games such as Speed Race, Awake, and Pursuit in 1974. Additionally, text-based computer games like Hunt the Wampum and Star Trek from 1972 also included enemies whose movement was based on stored patterns. The incorporation of microprocessors allowed for more computation and random elements in movement patterns. The golden age of video arcade games popularized the concept of AAA opponents, largely influenced by the success of Space Invaders in 1978. This game included increasing difficulty levels, distinct movement patterns, and in-game events dependent on hash functions derived from player input.

Galaxian (1979) expanded the complexity and diversity of enemy movements, including maneuvers in which individual enemies break out of formation.

Pac-Man (1980) brought AI patterns to maze games, giving each enemy its own distinct personality. Karate Champ (1984) subsequently introduced AI patterns to fighting games, although the poor AI prompted the release of a second version.

In First Queen (1988), a tactical action RPG, computer-controlled characters could be controlled by the AI. This concept was later brought to the action role-playing game genre by Secret of Mana (1993), which drew inspiration from Dragon Quest IV (1990), where the player can adjust the AI routines of non-player characters during battle.

Games like Madden Football, Earl Weaver Baseball, and Tony La Russa Baseball all aimed to replicate the coaching or managerial style of the selected celebrity by basing their AI on it. Madden, Weaver, and La Russa collaborated extensively with the development teams to ensure accuracy. Later sports titles allowed users to tune the AI variables to create their own managerial or coaching strategies. The emergence of new game genres in the 1990s prompted the use of formal AI tools such as finite state machines.
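As an illustration of this kind of formal tool, the sketch below implements a tiny finite state machine for a game character. The states, events, and transitions are invented for the example rather than drawn from any particular title.

    # A minimal finite state machine for an NPC; states and events are hypothetical.
    TRANSITIONS = {
        ("idle",   "enemy_spotted"): "attack",
        ("attack", "low_health"):    "flee",
        ("attack", "enemy_lost"):    "idle",
        ("flee",   "healed"):        "idle",
    }

    class NpcStateMachine:
        def __init__(self, state="idle"):
            self.state = state

        def handle(self, event):
            # Stay in the current state if no transition is defined for this event.
            self.state = TRANSITIONS.get((self.state, event), self.state)
            return self.state

    npc = NpcStateMachine()
    for event in ["enemy_spotted", "low_health", "healed"]:
        print(event, "->", npc.handle(event))   # attack, flee, idle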

Real-time strategy games posed particular challenges for AI: large numbers of objects, incomplete information, pathfinding, real-time decision-making, and economic planning. The first games in the genre had notable difficulties. Herzog Zwei (1989), for instance, had nearly broken pathfinding and very basic three-state state machines for unit control, while Dune II (1992) attacked the player's base head-on and relied on several cheats.
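Pathfinding is one of the AI problems noted earlier as demanding processor time. The sketch below shows a simple breadth-first search on a small grid; production RTS games typically use more elaborate methods such as A* with hierarchical maps, so this is only a minimal illustration.

    from collections import deque

    def find_path(grid, start, goal):
        """Breadth-first search on a grid of 0 (walkable) and 1 (blocked) cells."""
        rows, cols = len(grid), len(grid[0])
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            current = frontier.popleft()
            if current == goal:
                break
            r, c = current
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                    came_from[(nr, nc)] = current
                    frontier.append((nr, nc))
        if goal not in came_from:
            return None                      # no route around the obstacles
        path, node = [], goal
        while node is not None:              # walk back from the goal to the start
            path.append(node)
            node = came_from[node]
        return path[::-1]

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(find_path(grid, (0, 0), (2, 0)))   # route that detours around the wall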

Bottom-up implementations of AI methods appeared in games such as Creatures and Black & White, which focused on emergent behavior and on evaluating player actions.

Another example is Façade (2005), an interactive story that relied primarily on interactive multi-way dialogue and made AI its central feature. Games have also served as platforms for developing artificial intelligence with applications beyond gameplay: Watson, the Jeopardy!-playing computer, and the RoboCup tournament, in which robots are trained to compete at soccer, are notable examples.

There are purists who argue that the "AI" in the term "game AI" overstates its value. Unlike academic AI, which involves fields such as machine learning, decision making based on arbitrary data input, and the ultimate objective of strong AI capable of reasoning, game AI often relies on a few rules of thumb, or heuristics, that provide a satisfactory gaming experience. Game developers' increasing awareness of academic AI, and the academic community's growing interest in computer games, are making the definition of AI in games less idiosyncratic. Nevertheless, significant differences remain between application domains, making game AI a distinct subfield. One important distinction is that AI problems in games can be solved by cheating, which is not possible in other domains such as robotics. Inferring the position of an unseen object, for example, is challenging in robotics, but in a computer game an NPC can simply look the position up in the game's scene graph. While cheating can lead to unrealistic behavior and is not always desirable, its very possibility distinguishes game AI and raises new problems, such as deciding when and how to use it. The Betony Intelligent Agent Platform is designed specifically for online turn-based strategy games, in which players take turns rather than playing in real time.

Game applications have a long tradition in artificial intelligence because their problem definitions are highly variable and scalable within a restricted domain and because their results are easy to evaluate. The "AI inside" feature is of high importance in the fast-growing electronic gaming market, which is worth billions of dollars; revenue from PC games software alone matches that of box-office movies. Many traditional games, such as Go-Moku and Nine Men's Morris, have recently been solved using AI techniques, and Deep Blue's victory over Kasparov was a significant milestone in this field. It is uncertain, however, whether these techniques can be applied to modern computer games, which present more complex problems for AI than traditional games do, even though modern games largely build on the same techniques and their variants and extensions.

AI techniques have a wide range of applications in modern computer games. Although AI does not always need to be personified, artificial intelligence in computer games is primarily associated with characters. These characters can be regarded as agents that fit the AI agent concept well. The intelligence of a game agent or character is perceived by the player along several dimensions, including physical characteristics, language cues, behaviors, and social skills.

Physical characteristics like attractiveness are more a concern for psychologists and visual artists. Language skills are not typically relevant for game agents and are also disregarded here. When evaluating an agent's intelligence, the most important aspect is the goal-directed component, which is examined in the remainder of this paper. In contemporary computer games, a common approach to implementing goal-directed behavior is to utilize predetermined behavior patterns. This is often accomplished through the use of simple if-then rules.
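A minimal sketch of the rule-based approach just described is given below; the percept keys and actions are invented for illustration and do not come from any particular game.

    # Predetermined behaviour as simple if-then (condition -> action) rules.
    def choose_action(percept):
        if percept.get("player_visible") and percept.get("health", 100) < 25:
            return "retreat"
        if percept.get("player_visible"):
            return "attack"
        if percept.get("noise_heard"):
            return "investigate"
        return "patrol"

    print(choose_action({"player_visible": True, "health": 20}))  # -> retreat
    print(choose_action({"noise_heard": True}))                   # -> investigate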

In more advanced approaches involving neural networks, behavior becomes adaptable, but the purely reactive nature has not yet been overcome. Many computer games address this challenge by allowing computer-guided agents to cheat. However, keeping an environment with cheating agents credible becomes increasingly difficult given the growing complexity and variability of computer game environments. Imagine, for example, a scenario in which a player destroys a communication vehicle in an enemy convoy to prevent the convoy from reporting to its headquarters.

If the game cheats by skipping a realistic simulation of the characters' behavior and instead reads the game's internal map information directly, the enemy headquarters may still become aware of the player's subsequent attack on the convoy.
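The contrast can be sketched roughly as follows: an NPC that "cheats" reads the player's position straight out of the game state, while a non-cheating NPC only learns of the attack if a report can actually reach headquarters. The data structures and names here are hypothetical.

    # Hypothetical game state: the convoy's radio vehicle has been destroyed.
    game_state = {
        "player_pos": (42, 17),
        "convoy_radio_operational": False,
    }

    def hq_knows_attack_cheating(state):
        # Cheat: read the internal map directly, ignoring whether a report was possible.
        return state["player_pos"] is not None

    def hq_knows_attack_simulated(state):
        # Non-cheating: headquarters only learns of the attack if the convoy can report it.
        return state["convoy_radio_operational"]

    print(hq_knows_attack_cheating(game_state))   # True  -> unrealistic awareness
    print(hq_knows_attack_simulated(game_state))  # False -> destroying the radio mattered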

In artificial intelligence, intelligent agents (IAs) are autonomous entities that observe their environment through sensors and act on it through actuators, with rationality as their main goal. Such agents may acquire knowledge or learn in order to accomplish their objectives, and they vary in complexity, from a simple reflex machine such as a thermostat to a human being or a community of individuals working toward a common goal. Intelligent agents are commonly grouped into five classes according to their perceived intelligence and capability: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents. Simple reflex agents make decisions solely on the basis of the current percept and ignore the history of previous percepts; their decisions follow condition-action rules, in which an action is triggered when a certain condition is met. These agents work effectively only when the environment is fully observable.

Some reflex agents can also store information about their current state, allowing them to disregard conditions whose actions have already been initiated. Simple reflex agents operating in partially observable environments commonly fall into infinite loops; model-based agents, however, can handle such environments. A model-based agent stores its current state internally, in a structure that represents the unseen part of the world. Because this knowledge is a model of the world, the agent is called model-based: it maintains an internal model reflecting some of the unobserved aspects of the current state, based on its percept history, and otherwise makes decisions in the same way as a reflex agent.
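A minimal sketch contrasting the two agent classes just described; the percepts and rules are invented. The model-based variant remembers the last known position of a target even when the current percept no longer contains it.

    class SimpleReflexAgent:
        """Acts only on the current percept via condition-action rules."""
        def act(self, percept):
            return "chase" if percept.get("target_visible") else "wander"

    class ModelBasedReflexAgent:
        """Keeps an internal model of unobserved state (here: the last seen position)."""
        def __init__(self):
            self.last_seen = None            # internal model of the world
        def act(self, percept):
            if percept.get("target_visible"):
                self.last_seen = percept["target_pos"]
                return "chase"
            if self.last_seen is not None:
                return f"search near {self.last_seen}"
            return "wander"

    agent = ModelBasedReflexAgent()
    print(agent.act({"target_visible": True, "target_pos": (3, 4)}))  # chase
    print(agent.act({"target_visible": False}))                       # search near (3, 4)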

Goal-based agents, on the other hand, go beyond model-based agents by utilizing goal information. This goal information describes desirable situations and allows the agent to select among multiple possibilities in order to reach a goal state. Within artificial intelligence, search and planning are subfields dedicated to finding action sequences that achieve an agent's goals. Although goal-based agents may appear less efficient in some cases, they are more flexible because their decision-making knowledge is explicitly represented and can be modified.

In addition to goal-based agents, there are utility-based agents. These agents not only differentiate goal states from non-goal states but also assess how desirable a particular state is. This assessment is made through a utility function that assigns a measure of utility to each state. A more general performance measure should allow different world states to be compared according to how happy they would make the agent; the term "utility" describes this degree of happiness.

A rational, utility-based agent chooses the actions that maximize the expected utility of the outcomes; that is, it seeks the highest average utility, weighted by the probability of each outcome.
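As a small illustration of this principle, the sketch below picks the action with the highest expected utility, given made-up probabilities and utilities for each outcome.

    # Each action maps to a list of (probability, utility) pairs for its possible outcomes.
    actions = {
        "attack":  [(0.6, 10), (0.4, -8)],   # may win big or lose units
        "defend":  [(0.9, 2),  (0.1, -1)],   # safe but low payoff
        "retreat": [(1.0, 0)],
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    for name, outcomes in actions.items():
        print(name, expected_utility(outcomes))
    print("chosen:", best)   # attack: 2.8, defend: 1.7, retreat: 0.0 -> "attack"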

A utility-based agent has to model and keep track of its environment, which draws on extensive research in perception, representation, reasoning, and learning. Learning agents have the additional advantage of being able to operate in initially unknown environments and to become more competent than their initial knowledge alone would allow. The key distinction is between the "learning element," which is responsible for making improvements, and the "performance element," which is responsible for selecting external actions.

The learning element receives feedback from a "critic" on how the agent is doing and determines how the performance element should be modified to do better in the future. The performance element, which we previously considered to be the entire agent, takes in percepts and decides on actions. The final component of a learning agent is the "problem generator," which suggests actions that lead to new and informative experiences. According to additional sources, various types of sub-agents can exist within an intelligent agent or can function as standalone intelligent agents: decision agents (focused on decision making), input agents (processing and making sense of sensor inputs, such as neural-network-based agents), processing agents (solving problems such as speech recognition), spatial agents (relating to the physical, real world), and world agents (combining all the other types to enable autonomous behavior).
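The division of labour among performance element, critic, learning element, and problem generator described above can be sketched roughly as follows; the toy environment, reward signal, and update rule are hypothetical placeholders, not a full learning implementation.

    import random

    class LearningAgent:
        """Rough sketch of the learning-agent structure described above."""
        def __init__(self, actions):
            self.values = {a: 0.0 for a in actions}   # knowledge the performance element uses

        def performance_element(self, percept):
            # Select the externally executed action from current knowledge.
            return max(self.values, key=self.values.get)

        def critic(self, action, outcome):
            # Feedback on how well the agent is doing (here: the raw reward).
            return outcome

        def learning_element(self, action, feedback, rate=0.1):
            # Improve the performance element's knowledge using the critic's feedback.
            self.values[action] += rate * (feedback - self.values[action])

        def problem_generator(self):
            # Occasionally suggest an exploratory action that yields new experience.
            return random.choice(list(self.values))

    agent = LearningAgent(["attack", "defend"])
    for _ in range(20):
        explore = random.random() < 0.2
        action = agent.problem_generator() if explore else agent.performance_element({})
        reward = 1.0 if action == "defend" else -0.5   # toy environment
        agent.learning_element(action, agent.critic(action, reward))
    print(agent.values)   # "defend" ends up with the higher estimated value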

Believable agents are agents that exhibit a personality. Physical agents are entities that perceive through sensors and act through actuators.

Temporal agents use time-based stored information to offer instructions or data to a computer program or a human being, and they adjust their behavior according to program inputs.

Game theory, a branch of mathematics, studies interactions among rational, self-interested agents. While it overlaps with computer science, it grew out of work by economists in the twentieth century, with earlier roots before that. Game theory has become the main analytical framework of microeconomic theory, as reflected in economics textbooks and in the Nobel prizes awarded to game theorists.

Artificial intelligence began shortly after game theory, and pioneers such as von Neumann and Simon contributed to both fields. Both rely on decision theory. However, for most of its early years AI focused on designing agents that act alone and had little need for game theory. In the late 1990s, game theory became important to computer scientists because of interest in systems of computationally limited agents and because of the rise of distributed computing and the Internet, which require agents to reason about and interact with one another.

The decision-theoretic approach widely adopted by computer scientists has been extended and generalized by game theory. This fusion of computational methods and game-theoretic models is known as algorithmic game theory. The field has grown significantly in recent years and has a strong presence at major AI conferences such as IJCAI, AAAI, and AAMAS, as well as in journals such as AIJ, JAIR, and JAAMAS. It also has three dedicated archival conferences: the ACM Conference on Electronic Commerce (ACM-EC), the Workshop on Internet and Network Economics (WINE), and the Symposium on Algorithmic Game Theory (SAGT).

Algorithmic game theory should be distinguished from a broader research area within AI called multi-agent systems.

While multi-agent systems encompass most game-theoretic work within AI, the area also includes non-game-theoretic topics such as software engineering paradigms, distributed constraint satisfaction and optimization, logical reasoning about other agents' beliefs and intentions, task sharing, argumentation, distributed sensing, and multi-robot coordination. Algorithmic game theory has also gained attention outside of artificial intelligence.

The term "algorithmic game theory" first gained popularity in computer science theory and has since been used in other fields such as networking, security, and AAA research. While the term "multi-agent systems" is more commonly used, we argue that designating some AAA research as algorithmic game theory has advantages.

The label of algorithmic game theory highlights the similarities between this AI research and work by computer scientists in other areas, particularly theorists. Maintaining a connection between AI research and this growing body of work benefits researchers both inside and outside of AI.

In addition, only a portion of multi-agent systems research is game-theoretic, so it makes sense to have a cohesive name for the subset of work that takes a game-theoretic approach. One might wonder how AI work within algorithmic game theory differs from work in the theory community. While it is difficult to draw sharp boundaries between these literatures, two key differences stand out in the types of questions emphasized.

First, researchers in algorithmic game theory within AI often focus on practical reasoning about multi-agent systems. AI research emphasizes extending theoretical models to make them more realistic, tackling larger problems through computational techniques, and asking how agents should actually behave in competitive situations. AI also has a long tradition of practical methods for solving computationally hard problems, and these methods have been applied to game theory as well. Algorithmic game theory within AI emphasizes solving practical problems within resource limits rather than treating computational difficulty as an insurmountable obstacle.

In communication theory, interference in robot communication is often caused by the use of unlicensed ISM bands, which carry specific constraints. To avoid interference, and thereby help save lives, it is recommended to use licensed frequencies; this significantly reduces the likelihood of interference. To prevent one unit's signal from overpowering others, the output power between the control unit and the robot can also be constrained. Another common cause of communication failure between a robot and its control unit is loss of signal, which is mainly a matter of frequency. The higher the frequency, the smaller the antenna, because wavelength is inversely proportional to frequency; transmission efficiency, however, decreases at higher frequencies. Higher frequencies can penetrate denser materials than lower frequencies, but small particles such as dust resonate at high frequencies and absorb signal power. It is therefore best to choose a frequency in the middle range that optimizes radio communication. Figure 2 compares the factors taken into consideration. Consequently, UHF frequencies were chosen, since they offer relatively good signal penetration at a relatively low power output.
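The inverse relationship between frequency and antenna size can be made concrete with a small calculation; the example frequencies below are assumed UHF-range values for illustration only, not figures specified in the text.

    C = 299_792_458.0            # speed of light in m/s

    def wavelength_m(freq_hz):
        return C / freq_hz       # wavelength is inversely proportional to frequency

    def quarter_wave_antenna_m(freq_hz):
        # Common rule of thumb: a quarter-wave monopole is about one quarter wavelength long.
        return wavelength_m(freq_hz) / 4

    for f in (433e6, 900e6, 2.4e9):          # assumed example frequencies in Hz
        print(f"{f/1e6:7.0f} MHz -> wavelength = {wavelength_m(f):.3f} m, "
              f"quarter-wave antenna = {quarter_wave_antenna_m(f)*100:.1f} cm")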

Our system is publicly available on our servers. Users can register accounts on the website and log in to view each game's description and programming interfaces. After developing their agents, users upload the agents' source code through a simple web form and can then participate in contests or play single matches against other agents on the platform.

A match is a game played by several agents, who take actions in turns until the game ends; the outcome reflects the agents' relative levels of intelligence. A contest is a series of organized matches whose aim is to rank the agents by their level of play. Matches can be organized under various competition systems, such as round robins and Swiss systems, and multiple contests can be set up for long-term projects such as full-semester courses or large-scale competitions.

The Betony system comprises three parts: the frontend, the storage system, and the Judge system. The frontend is the Betony website and handles most interaction with users, including registration, agent uploads, contest participation, and result viewing. The storage system holds the agents' executable files, the logs, and database records about users, agents, and contests. The Judge system consists of the contest management module, which runs contests under a specific competition system, and the Judge module.

The contest management module determines the players for each match and then uses the Judge module to run the match and record a log containing the winner, the scores, and the progression of the match. Betony offers several features that add to its entertainment value and educational benefit. First, it provides a visual representation of the match process through Flash animations, allowing users to replay specific matches from the log files; Betony users have been observed to spend a significant amount of time watching these animations.
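The Judge module's role can be pictured as a turn loop like the one below. The toy game, the agent interface, and the log format are entirely hypothetical and are not Betony's actual API; they only illustrate the idea of running a match turn by turn and recording enough information to replay it.

    class NimGame:
        """Toy turn-based game: players alternately remove 1-3 sticks; taking the last stick wins."""
        def initial_state(self):
            return {"sticks": 10, "to_move": 0}
        def is_over(self, state):
            return state["sticks"] == 0
        def apply(self, state, move):
            return {"sticks": state["sticks"] - move, "to_move": 1 - state["to_move"]}
        def scores(self, state):
            winner = 1 - state["to_move"]          # the player who just moved took the last stick
            return [1 if p == winner else 0 for p in (0, 1)]

    class GreedyAgent:
        def get_move(self, state):
            return min(3, state["sticks"])         # naive strategy: always take as many as allowed

    def run_match(game, agents):
        """Sketch of a judge loop: query agents in turn, apply moves, record a log."""
        state = game.initial_state()
        log, turn = [], 0
        while not game.is_over(state):
            player = turn % len(agents)
            move = agents[player].get_move(state)
            state = game.apply(state, move)
            log.append({"turn": turn, "player": player, "move": move})
            turn += 1
        return {"scores": game.scores(state), "log": log}

    print(run_match(NimGame(), [GreedyAgent(), GreedyAgent()]))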

According to student feedback, Betony also generates increased interest in agent development and contest participation.

Additionally, users have the option to select opponents from a vast selection of agents available on the platform. This enables users to learn from others' strategies and enhance their own algorithms. To ensure privacy, Betony includes a function that allows users to set their agents as private, preventing them from being chosen as opponents by others.

Betony operates in a dynamic, sequential, continuous, multi-agent environment, and its contests are organized according to round-robin and Swiss-system principles.

In a round robin, each player competes against every other contestant in turn. In the Swiss system, by contrast, players are paired in each of several rounds, typically against opponents with similar scores, without every pair having to meet. For a new game to run on the server, Judge, Control, Display, and Introduction files are required, and communication between the controller and the Judge is handled through a set of designated functions.
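A small sketch of the two scheduling ideas: round robin pairs every agent against every other exactly once, while a Swiss-style round pairs neighbours in the current standings. This is only an illustration of the scheduling principle, not Betony's implementation, and the player names and scores are invented.

    from itertools import combinations

    def round_robin_pairs(players):
        """Every player meets every other player exactly once."""
        return list(combinations(players, 2))

    def swiss_round_pairs(standings):
        """One Swiss-style round: sort by score, then pair neighbours."""
        ranked = sorted(standings, key=standings.get, reverse=True)
        return [(ranked[i], ranked[i + 1]) for i in range(0, len(ranked) - 1, 2)]

    print(round_robin_pairs(["A", "B", "C", "D"]))
    print(swiss_round_pairs({"A": 2, "B": 0, "C": 1, "D": 1}))   # pairs A-C and D-B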

Problem Statement: Game software plays a crucial role in modern software engineering, yet there is limited literature on, and understanding of, the inner workings of AI in various game software. This study aims to identify and understand the components of game AI, how they operate, the environment they function in, the challenges they face, and the relationship between good AI and high-performing games. The techniques and principles of game AI design are expected to drive the next revolution in the gaming industry, which will involve learning and agent adaptation.

Developers have been actively pursuing and researching learning techniques to enhance game AI. For AI in a game to be believable, it must simulate cognition, perceive the environment accurately, and respond convincingly within that context.

When defining game AI, programmers must code agent activity and behavior so that characters appear intelligent and react realistically to perceived conditions and situations.
