Advances in artificial intelligence are heating up. In truth, though, AI has been a staple of science fiction since the first Terminator movie came out in 1984. Those films depict an example of what researchers call "artificial general intelligence," or AGI. So how close are we to that?
The question of artificial consciousness brings us to the ethical side of AGI. Can a machine ever achieve consciousness in the way humans do? And if it could, would we need to treat it as a person?
One scientific view holds that consciousness arises from biological input being interpreted and reacted to by a biological creature, such that the creature becomes an entity in its own right. Remove the qualifying word "biological" from that definition, and it's not hard to see how even existing AIs could already be considered conscious, albeit only dimly so.
One thing that defines human consciousness is the ability to recall memories and imagine the future. In many respects, this is a uniquely human capability. If a machine could do this, we might credit it with artificial general intelligence.
So, is a machine or algorithm like this ever possible, and if so, how far off is it?
In theory, everything we just described an AI doing is possible; it's just not practical with today's technology. The processing power required to emulate a human brain is enormous, but quantum computing might be our gateway to successfully creating artificial general intelligence.
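To get a feel for why that processing power is so enormous, here is a rough back-of-envelope estimate. The figures below are commonly cited neuroscience approximations (roughly 86 billion neurons, on the order of 10,000 synapses per neuron, and firing rates of up to about 100 Hz), not exact measurements, and "one operation per synaptic event" is a deliberately crude simplification.

```python
# Back-of-envelope estimate of the compute needed to emulate a human
# brain at the synapse level. All constants are rough, commonly cited
# approximations, not precise measurements.

NEURONS = 8.6e10           # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron (rough average)
FIRING_RATE_HZ = 100       # upper-end average firing rate, spikes/sec

synapses = NEURONS * SYNAPSES_PER_NEURON
ops_per_second = synapses * FIRING_RATE_HZ  # one "op" per synaptic event

print(f"Synapses: {synapses:.1e}")
print(f"Synaptic events per second: {ops_per_second:.1e}")
```

Even this simplistic model lands on the order of 10^16 to 10^17 events per second, within roughly an order of magnitude of the world's largest exascale supercomputers running flat out, which is why brute-force emulation remains out of reach for everyday hardware.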