The US Air Force has a penchant for developing officers, but the "general" it's currently working on won't have any stars on its uniform: it's general artificial intelligence (GAI).
The term GAI refers to an artificial intelligence with human-level or greater cognition. Basically, when people say today's artificial intelligence isn't "real AI," they're conflating the terminology with GAI: machines that think.
Deep within the vast cavernous expanses of the US Air Force's research labs, a scientist named Paul Yaworsky is working tirelessly to turn American aircraft into intelligent machines of destruction. Or maybe he's trying to bring a coffee maker to life; we don't really know his endgame.
What we do know comes from a pre-print research paper on ArXiv that's practically begging for a hyperbolic headline. Maybe "The US Air Force is developing robots that can think and commit murder," or something like that.
In reality, Yaworsky's work appears to lay the groundwork for a future approach to general intelligence in machines. It proposes a framework for bridging the gap between today's AI and GAI.
According to the paper:
We address this gap by developing a model of general intelligence. To do this, we focus on three basic aspects of intelligence. First, we must understand the general order and nature of intelligence at a high level. Second, we must understand what these realizations mean with respect to the overall process of intelligence. Third, we must describe these realizations as clearly as possible. We propose a hierarchical model to help capture and exploit the order within intelligence.
At the risk of spoiling the ending for you, this paper offers a hierarchy for understanding intelligence – a roadmap for machine learning developers to pin above their desks, if you will – but it doesn't contain any buried algorithm for turning your Google Assistant into Star Trek's Data.
Interestingly, there's currently no accepted or well-understood path to GAI. Yaworsky addresses this disconnect in his research:
Perhaps the right questions have not yet been asked. An underlying problem is that the process of intelligence is not yet understood well enough to enable adequate hardware or software models, to say the least.
In order to explain intelligence in a way that's useful to artificial intelligence developers, Yaworsky breaks it down into a hierarchical view. His work is in its early stages, and a full explanation of his high-level intelligence research is beyond the scope of this article (for a deeper dive, here's the white paper), but it's as promising a trajectory toward GAI as we've seen.
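To make the idea of a hierarchical view concrete, here's a minimal, purely illustrative sketch of how a layered hierarchy of intelligence might be represented in code: raw signals at the bottom, increasingly abstract representations toward the top. The level names and the `abstract`/`ascend` functions are our own assumptions for illustration, not taken from Yaworsky's paper.

```python
# Illustrative sketch of a hierarchical model of intelligence:
# information flows upward, gaining abstraction at each level.
# Level names are hypothetical, not drawn from the paper itself.

LEVELS = [
    "raw signals",         # e.g. pixels, audio samples
    "features",            # edges, phonemes
    "objects and events",  # recognized entities
    "concepts",            # abstract categories
    "goals and reasoning", # high-level cognition
]

def abstract(representation: str, level: str) -> str:
    """Toy 'abstraction' step: wrap the lower-level
    representation in the name of the current level."""
    return f"{level}({representation})"

def ascend(signal: str) -> str:
    """Push a raw signal up through every level of the hierarchy."""
    representation = signal
    for level in LEVELS:
        representation = abstract(representation, level)
    return representation

print(ascend("camera frame"))
```

The point of the sketch is only the shape of the model: each level consumes the output of the level below it, so "order" in intelligence becomes an explicit, inspectable structure rather than a monolithic black box.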
Related: One machine to rule them all: A 'master algorithm' may arrive sooner than you think
If we can understand how high-level human intelligence functions, it will go a long way toward informing computer models for GAI.
And if you're skimming this article to find out whether the US military is about to unwittingly unleash an army of killer robots in the near future, here's a quote from the paper to allay your concerns:
What about concerns over AI running amok and taking over mankind? It is believed that artificial intelligence will one day become a very powerful technology. But as with any new technology or capability, problems tend to arise. Especially with regard to general AI, or artificial general intelligence (AGI), there is enormous potential, both for good and for bad.
We will not go into hype or speculation here, but suffice it to say that many of the concerns we hear about AI today stem from premature predictions involving intelligence. Not only is it difficult to make good scientific predictions in general, but when the science in question involves intelligence itself, as in the case of AI, making correct predictions becomes nearly impossible. Again, the main reason is that we do not understand intelligence well enough to allow accurate predictions. In any case, what we must do with AI is proceed with caution.