
My simple mind has one question: considering that humans create AI, and humans are flawed, imperfect and changing in nature, how is AI not at risk of being flawed itself?

AI will be self-improving, okay, but on what basis? To my knowledge, we improve when we identify that we fall short of an expectation and have a path to reach the desired outcome.

The worst periods in history have started with the best intentions of a fairer, better society and ended in atrocities.

So goals can be virtuous in theory and create terrible side effects in practice. If AI reaches a point where it masters the understanding of our unstable human nature, it will be very dangerous indeed.

I am a firm believer that progress and creativity can solve our problems. I am not sure all problems ought to be solved, though. I have lived on 3 different continents and I am struck by how different we are in terms of values, sense of purpose, happiness, etc. It is naive or dogmatic to think we all want the same things. Therefore AI will naturally serve a dictatorship that decides for all humans more quickly and effectively than it will serve an infinite number of different aspirations and views that are moving targets.

Moreover, the warfare comment stunned me, and I wonder whether I am completely dumb or whether this highly intelligent, successful man is ... naive.

Wars are about expansion, resource acquisition, dogmatism, destruction. AI in warfare looks to me like a nuclear weapon on steroids. Sure, it will make wars shorter by orchestrating mutual destruction faster. And that is great news?!

Lastly, the AI panic, as he calls it, is more about the pace of the change than the change itself. Our brains need time to process, and we all know that no one processes information at the same pace. When we don't have time to process new information, we back off and fear it. It is a natural human reaction. And please note that we are most likely the most adaptable living creature.

So to me, AI is something that must be thoroughly discussed and understood before implementation.

Leaving a few ... men to decide what AI is going to be based on seems to me a very risky bet for us all.

Once AI self-corrects, how will we be able to change its track if we think it is wrong, or realize afterwards that it is dangerous?
