Monte Carlo Tree Search
I use the Unity game engine to implement the Monte Carlo Tree Search (MCTS) algorithm for playing Connect 4.
The Interface.cs script attached to the Board object exposes two public variables in addition to the prefab links and the human Boolean: Max Train Per Frame and Max Train Threshold. Max Train Per Frame is the maximum number of search cycles MCTS will perform in the background each frame. Max Train Threshold is the number of training iterations after which MCTS tapers its per-frame training from Max Train Per Frame down to 1. The default values are 100 and 3,000,000.

If the game runs slowly when you first start it, try lowering Max Train Per Frame to 50 or 25. If it runs very slowly after extended play, you can lower Max Train Threshold as well. 3,000,000 iterations is a decent training goal to hit.

Your “AI” is an MCTS agent that maintains a table of win-loss ratios, built from its searches and rollouts, which persists in memory from the moment you start the game. Your “naïve baseline” is another MCTS agent that is created fresh each frame, so its decision is based solely on a single random rollout. After playing 25-50 games, you should see the persistent MCTS agent winning around 65-75% of games against the naïve baseline.
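The project itself is C# in Unity, but the search loop described above is language-neutral. Here is a minimal sketch of MCTS in Python, using a toy Nim game (take 1-3 stones, the player taking the last stone wins) instead of Connect 4 to keep it short. All names are illustrative, not taken from the repository. The wins/visits counters on each node play the role of the persistent agent's win-loss table; the naïve baseline corresponds to running this loop with only a single rollout per move and discarding the tree afterward.

```python
import math
import random

class Node:
    """One state in the search tree. The wins/visits counters are the
    win-loss statistics a persistent agent accumulates over time."""
    def __init__(self, pile, to_move, parent=None, move=None):
        self.pile = pile              # stones left in the Nim pile
        self.to_move = to_move        # 0 or 1: player whose turn it is
        self.parent = parent
        self.move = move              # move (1-3 stones) that reached this node
        self.children = []
        self.untried = [m for m in (1, 2, 3) if m <= pile]
        self.wins = 0.0               # wins for the player who made self.move
        self.visits = 0

def rollout(pile, to_move):
    """Play random moves to the end; the player taking the last stone wins."""
    if pile == 0:                     # terminal: the previous player just won
        return 1 - to_move
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return to_move
        to_move = 1 - to_move

def search(pile, iterations, c=math.sqrt(2)):
    """Run MCTS from `pile` with player 0 to move; return the best move."""
    root = Node(pile, to_move=0)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded
        while not node.untried and node.children:
            node = max(node.children,
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried child
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.pile - m, 1 - node.to_move, node, m)
            node.children.append(child)
            node = child
        # 3. Rollout from the new state
        winner = rollout(node.pile, node.to_move)
        # 4. Backpropagation: credit each node's move-maker if they won
        while node is not None:
            node.visits += 1
            if winner == 1 - node.to_move:   # the move into `node` was theirs
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move
    return max(root.children, key=lambda ch: ch.visits).move

if __name__ == "__main__":
    random.seed(0)
    # From a pile of 5, optimal play takes 1, leaving a multiple of 4;
    # with enough iterations the visit counts converge to that move.
    print(search(5, 5000))
```

A single-rollout baseline like the one described above would just call `search(pile, 1)` on a fresh tree each turn; the persistent agent's advantage comes from keeping and growing the statistics between moves.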
GitHub link: whisperers26/AI-work (github.com)