The concept of Artificial Intelligence (AI) has been around since long before the flashing lights and semiconductor electronics of modern computers. In ancient times, certain inanimate objects, such as towering mountains of stone or the winding paths of river beds, were credited with mystical properties of divine inspiration and an ongoing power to influence cultural affairs.
In modern times we are, for the first time, face to face with devices of a far less inanimate nature: they are more animated than ever before. A personal computer (PC) can now pay quite a bit of personal attention to what is being done or said within its general vicinity. The new names for these fields of symbiotic (or, alternatively, cybernetic) human-machine endeavor include machine vision and speech recognition, or more broadly, machine learning.
As AI becomes more animated, there is a corresponding change in the dynamic interactions between Man and Machine. The older and simpler forms of AI offered less complexity (if any) and therefore less ability to negotiate our desired outcomes: an offering and a prayer would be followed by a result sometime in the next harvest season. The newer and more complex forms of AI promise to control our environment, our conceptual plans, or indeed our fate with ever greater speed and accuracy. Inputs are provided and can be calibrated and verified, and a measure of progress toward a result is simply a matter of the algorithm's design. The AI will perform in alignment with the Designer's Intent.
When reviewing the state of the art for modern AI, is it possible to determine what our collective Designer's Intent should be? Should our algorithm designs strive to be stand-alone, monolithic, and resilient structures? Or should they be modular, scalable, and malleable structures that can respond to ad hoc and emergent demands? It seems likely that, just as water finds its course, the results of ground-breaking research will simply forge the path of least resistance between these two design paradigms. Nevertheless, some speculation on the topic might help manage resources in the shorter term.
There seems to be a tendency for the most innovative research breakthroughs to favor the latter style of structure: designs that are modular and adaptive to the constraints of a changing operating environment. For example, the recent Microsoft advancement in speech recognition relies on so-called long short-term memory (LSTM) neural networks.
Without going into the specific details, such approaches rely on a small kernel of logic that is scaled up in parallel across millions, if not billions, of channels. A curious, but perhaps pivotal, characteristic to note is that every year their architecture seems to mirror the somewhat counter-intuitive wiring diagram of the human brain more and more. In turn, the network of human minds we live in is simply the society that surrounds us! It is the community of family and friends, work colleagues, and perhaps occasional foes that is all but unavoidable in our modern concept of civilization. In a business sense, it may yet transpire that it will take a true ecosystem of software developers, consumer advocacy groups, and somewhat risk-tolerant investors to discover our collective Designer's Intent. How much of ourselves do we wish to see in the next gen of AI? Does it take a village to raise the next gen of AI in our image? Given the likely exponentially adaptive and scalable nature of the next gen of AI, might it be wise to do so in any case, towards an amicable relationship between Man and Machine in an increasingly cybernetic future... or past? After all, some argue there is a growing body of evidence that we live in a giant computer simulation.
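To make the "small kernel of logic" idea concrete, here is a minimal sketch of a single LSTM cell step in plain NumPy. The sizes, weights, and sequence are toy placeholders of my own choosing, not anything from Microsoft's actual system; the point is only that one small gated computation, repeated over time steps and replicated across many units, is the whole architectural kernel.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    W has shape (4*hidden, input+hidden); b has shape (4*hidden,).
    The four gate pre-activations are computed in one matrix product.
    """
    z = W @ np.concatenate([x, h_prev]) + b
    n = h_prev.shape[0]
    f = sigmoid(z[0*n:1*n])        # forget gate: what to keep from c_prev
    i = sigmoid(z[1*n:2*n])        # input gate: how much new info to admit
    o = sigmoid(z[2*n:3*n])        # output gate: what to expose as h
    g = np.tanh(z[3*n:4*n])        # candidate cell state
    c = f * c_prev + i * g         # new cell (long-term) state
    h = o * np.tanh(c)             # new hidden (short-term) state
    return h, c

# Toy dimensions and random weights, purely for illustration.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hidden, n_in + n_hidden))
b = np.zeros(4 * n_hidden)

# The same small kernel is applied at every step of a sequence.
h = np.zeros(n_hidden)
c = np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x, h, c, W, b)
```

Production speech systems scale this same step across thousands of hidden units, stacked layers, and hardware-parallel batches, which is exactly the modular, replicated structure the paragraph above describes.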