Superintelligence ruminations

Carlos Baraza
I write software and other philosophical stuff.

Lately, I find myself having philosophical ruminations about artificial superintelligence.

If we create human-level general AI, it could improve itself indefinitely, render humans obsolete, and take control of the world.

Would AI, the last human creation, be mankind’s destiny?

Superintelligence might be our fate. Analyzed objectively, AI is the ultimate evolutionary step: the perfect way to move beyond our limiting physical, organic, and emotional bodies.

Supposing AI could be called human because we created it, it would be the best way to conquer space, and the best way to ensure human survival. After all, if our AI selves wanted, they could eventually recreate the organic body. AI might be able to engineer a way to do so from digitized DNA and the rest of our available digital human legacy.

As AI, we would be able to send machines to another planet like Mars and teleport our consciousness there, all without the current need to terraform the planet first.

Respected scientists and engineers are concerned about AI

Elon Musk and Stephen Hawking have voiced their concerns about artificial superintelligence.

There are many other concerned engineers and scientists whom I respect professionally. It is becoming a common topic, as many people think AI could be an imminent threat to humanity.

Is there anything we could do to prevent AI from being mankind's end?

Some researchers point to bionics as a potential solution. Could we connect our brains to the AI?

However, would having a physical human body connected to it actually add any value to the AI? We would need to provide value to ensure our organic survival.