The core mechanism behind using machine learning to "solve" games is building in an incentive signal that the algorithm uses to prioritize and identify the best values for the parameters it can control. For my experiments I think I will need an external, purely incentive-and-disincentive stimulus that can be applied during any early assisted learning stages that turn out to be necessary. I expect the process of connecting actuator parameters to movement results to be full of dead ends, and I want a way to back the ML algorithm out of them and encourage the patterns that look more productive.
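To make that concrete, here is a minimal sketch of what I mean, not a final design: a random-search learner over a small vector of "actuator" parameters, with an external operator signal that can override the automatic score during early assisted learning. The movement_score() and operator_hint() functions are hypothetical stand-ins for whatever sensor feedback and human button-pressing the real rig would provide.

```python
import random

N_PARAMS = 4     # number of actuator parameters under ML control
STEP = 0.1       # size of each random perturbation
EPISODES = 200


def movement_score(params):
    """Hypothetical automatic score of the movement produced by params.
    Stands in for whatever sensor feedback the real hardware reports."""
    target = [0.5, -0.2, 0.8, 0.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))


def operator_hint():
    """External incentive/disincentive stimulus for assisted learning.
    Returns +1 (encourage), -1 (discourage), or 0 (no opinion).
    Random here; in practice it would be a human pressing a button."""
    return random.choice([+1, 0, 0, 0, -1])


def train():
    params = [0.0] * N_PARAMS
    best = movement_score(params)

    for _ in range(EPISODES):
        # Propose a small random change to one parameter.
        candidate = params[:]
        i = random.randrange(N_PARAMS)
        candidate[i] += random.uniform(-STEP, STEP)

        score = movement_score(candidate)
        hint = operator_hint()

        # The external stimulus biases acceptance: a disincentive can back
        # the learner out of a dead end even if the raw score looks fine,
        # and an incentive can keep a promising pattern alive.
        accept = score > best
        if hint > 0:
            accept = True
        elif hint < 0:
            accept = False

        if accept:
            params, best = candidate, score

    return params, best


if __name__ == "__main__":
    final_params, final_score = train()
    print("learned parameters:", [round(p, 3) for p in final_params])
    print("final score:", round(final_score, 3))
```

The point of the sketch is only the accept/override step: the automatic score does most of the work, but the external stimulus gets the last word while the learner is still being assisted.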
However, it seems relevant to mention the unfortunate babbling idiocy that eventually forced me to erase and restart every one of the early (local brains only) voice-to-text translators I tried.
Caveat creator. Do I need a kill switch?