This reinforcement learning system uses Q-learning, a popular algorithm in reinforcement learning. Here's a breakdown of how it works:
- The system maintains a Q-table (`_qTable`) that stores the expected reward (Q-value) for each state-action pair.
- The state is determined by the AI unit's health, ammo, and nearby enemies.
- Actions include "attack", "defend", "flank", and "retreat".
- The system uses an epsilon-greedy policy to balance exploration (trying new actions) and exploitation (using known good actions).
- After each action, the system calculates a reward based on the outcome and updates the Q-value for that state-action pair.
- Over time, the AI learns which actions are most beneficial in different situations.
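The mod itself would be written in SQF, but the core loop is compact enough to sketch in another language. Here's a minimal Python illustration of the steps described above, not the actual mod code: the state discretisation thresholds, the reward value, and the helper names are assumptions, and the parameter names simply mirror the SQF variables mentioned.

```python
import random
from collections import defaultdict

# Learning parameters (names mirror the SQF variables described above;
# the specific values are illustrative assumptions)
LEARNING_RATE = 0.1      # _learningRate (alpha)
DISCOUNT_FACTOR = 0.9    # _discountFactor (gamma)
EXPLORATION_RATE = 0.2   # _explorationRate (epsilon)

ACTIONS = ["attack", "defend", "flank", "retreat"]

# Q-table: maps (state, action) -> expected reward, defaulting to 0.0
q_table = defaultdict(float)

def get_state(health, ammo, nearby_enemies):
    """Discretise the unit's situation into a hashable state key.
    The bucket sizes here are arbitrary placeholders."""
    return (health // 25, ammo // 10, min(nearby_enemies, 5))

def choose_action(state):
    """Epsilon-greedy policy: explore with probability epsilon,
    otherwise exploit the best-known action for this state."""
    if random.random() < EXPLORATION_RATE:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update_q(state, action, reward, next_state):
    """Standard Q-learning update rule:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += LEARNING_RATE * (
        reward + DISCOUNT_FACTOR * best_next - q_table[(state, action)]
    )
```

In a running mod, `choose_action` would fire each decision tick and `update_q` would fire once the outcome of the chosen action is known.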
To integrate this into your mod:
- Call `fnc_initAILearning` for each AI unit you want to apply reinforcement learning to.
- Implement the action functions (`fnc_performAttack`, `fnc_performDefend`, etc.) to make the AI execute the chosen actions.
- Implement the reward calculation functions (`fnc_enemyKilled`, `fnc_tookDamage`, etc.) to evaluate the outcomes of actions.
- Adjust the learning parameters (`_learningRate`, `_discountFactor`, `_explorationRate`) to fine-tune the learning process.
- Consider persisting the Q-table between game sessions to allow for long-term learning.
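On the persistence point, here's one way it could work, again as a rough Python sketch rather than mod code (an actual Arma mod would more likely use something like `profileNamespace` to store the table between sessions). Since tuple keys aren't valid JSON, they're round-tripped through `repr`/`literal_eval`:

```python
import json
from ast import literal_eval
from collections import defaultdict

def save_q_table(q_table, path):
    """Serialise the Q-table to JSON; tuple keys become repr() strings."""
    with open(path, "w") as f:
        json.dump({repr(k): v for k, v in q_table.items()}, f)

def load_q_table(path):
    """Restore a Q-table saved by save_q_table, re-parsing the keys.
    Unseen state-action pairs still default to 0.0."""
    q_table = defaultdict(float)
    with open(path) as f:
        for key_str, value in json.load(f).items():
            q_table[literal_eval(key_str)] = value
    return q_table
```

Saving on mission end and loading on mission start would let the AI keep what it has learned across sessions.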
This system will allow the AI to adapt its behavior based on the outcomes of its actions, potentially leading to more dynamic and challenging opponents. Reinforcement learning can take time to converge on optimal strategies, so I may need to run many iterations or pre-train the AI before deploying it in a mod. And commanding a squad of these units with your voice would be out of this world lol.
The voice command thing is neat as hell. There's a dude who made a mod for Arma 3 that utilised Windows' built-in voice service, but I think you'd want a platform-agnostic solution to really make an impact. That's going to be tricky, but hell, with AI these days it seems anything is possible. And since this is a milsim game you don't really have a huge scope of commands to deal with, so that's an upside at least.
Bohemia should encourage this bleeding-edge AI modding and make their play data public so people can train models off it.
EDIT: Made a request to them to both make trainable data available and maybe get a basic in-game voice command function going, so modders can hook it into any NPC AI mods (I think voice commands will play a big part in the near future).
Just imagine it: you're on a 16-player server, all players are squad leaders, leading realistic, experienced (trained) AI soldiers and assets, all naturally responding to voice commands. Bro, this could be bigger than the "DayZ" effect that propelled Arma into the mainstream back in the day.
https://www.youtube.com/watch?v=dm_yY-hddvE
KAI utilized a scoring method for context.