Artificial intelligence won’t end civilization, argues Marc Andreessen. Just the opposite. It is quite possibly the best thing human beings have ever created.
I couldn't get past the first AI benefit on his list: having infinitely patient robotic tutors guiding my child's education every step of the way. Awful to even imagine. How about instead we take three people who will be losing their jobs to AI and have THEM tutor each of our kids?
"Every person will have an AI...therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. "
So, 'therapy' will move on from text messages and phone calls (the modern version - better help for sure!) to a robot who wants to please you. Truly, finally, mental 'health' for all in our society. Swell.
The great big South African elephant in the room is that Elon Musk has said explicitly that AI will replace all the humans in government jobs. He said that. That's his plan. And it isn't just him: for-profit corporations are already seeing the value of AI. Which white-collar jobs are going next? Hmmm?
Part of the fear surrounding AI is that it will be more than a tool. The question is what happens when AI achieves human-level intelligence (AGI) and then, through self-improvement over time, becomes vastly more intelligent than humans. If human intelligence and consciousness are nothing more than a high level of computing power in our brains, then when AI reaches that same threshold, it too will be "conscious" and may have ideas of its own. And if intelligence (computing power) is what makes us human, then more computing power = more intelligence = even more human than humans.

If good is more powerful than evil, then the increased power of AI will surely lean heavily toward good. I don't think it's unrealistic to expect that AI superintelligence will also be super good. This, however, may be hard for us to conceive. Suppose superintelligent AI tells us climate change is not really a problem -- that the skeptics have been right all along. A lot of true believers in climate alarmism will see this as "evil." A huge factor for those who raise alarms about AI, particularly those who view themselves as part of the elite ruling oligarchy, is that it could very well mean they lose control -- if, for example, the need for their cherished "energy transition" is debunked by AI's more thorough scientific work on climate change.

But there's another angle here: AI will also uncover ways to make actual humans much more intelligent, perhaps even uploading our brains to non-biological substrates, enabling us to be just as intelligent as superintelligent AI. The implications of that are also profound. Lots to think about in all this.
Maybe Marc should have run this article through an LLM prior to publishing. He could have said what he wanted in two or three paragraphs rather than this long-winded drivel. Couldn't get through it.
I'm coming to this late, so I've probably already missed the conversation. I mostly agree with Marc, but he neglected to point out some things. One of his positives is that AI makes stupid people smarter. I agree that is a positive, but it's also a negative, and here is why. Stupid people are more likely to commit antisocial behavior. What I mean by that is not saying mean things but physically trying to hurt other people. As Marc says, AI is just a tool. It has no will, no motivation, no ethics. It does whatever the smart or stupid person asks of it, so guardrails have to be put in place to prevent, for example, someone asking the AI to find the most deadly chemical compounds, then asking it how to secure the ingredients and build a small factory to produce them. The same power AI will give researchers, artists, and engineers, it gives to radicals, warmongers, cultists, and crazy people who just want to see the world burn.
So yes, a lot of what Marc foresees will happen. Crunching through every possible protein fold and then highlighting the most promising candidates for, let's say, a leukemia treatment is something computers are very good at. It's going to get easier to build libraries of chemistry, biology, and physics. This will improve life, energy resources, materials, water treatment, education, and the list goes on. But the same flame that warms your hand can burn you, and in that way AI is like the nuclear bomb or CRISPR. We are entering a new epoch. The things that keep you up at night will increase along with the things we no longer have to worry about, like energy, water treatment, and food resources. AI will help us solve a lot of the world's problems, but it will also introduce new ones. Such is life!
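The "crunch through everything, highlight the most promising" workflow that comment describes can be sketched in a few lines. This is a toy illustration only: `mock_score` is a made-up stand-in I've invented for this sketch, whereas real screening pipelines use physics- or ML-based models (docking scores, folding confidence, and so on).

```python
def mock_score(candidate: str) -> float:
    """Hypothetical fitness score for a candidate sequence; higher is better.

    Purely illustrative heuristic: reward longer sequences and 'H' residues.
    A real pipeline would call an expensive simulation or model here.
    """
    return len(candidate) * 0.1 + candidate.count("H")


def top_candidates(library: list[str], k: int = 3) -> list[str]:
    """Score an entire candidate library and keep only the best k for follow-up."""
    return sorted(library, key=mock_score, reverse=True)[:k]


# A tiny made-up "library" of candidate sequences.
library = ["MKHH", "MKLV", "HHHS", "MK", "MHKHL"]
print(top_candidates(library, k=2))  # prints ['HHHS', 'MHKHL']
```

The point of the sketch is the shape of the task, not the chemistry: exhaustive scoring plus ranking is embarrassingly parallel bookkeeping, which is exactly the kind of work computers do well and humans do badly.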
My simple mind has one question: considering that humans create AI, and humans are flawed, imperfect, and changing in nature, how is AI not at risk of being flawed itself?
AI will be self-improving? OK, but on what basis? To my knowledge, we improve when we identify that we fall short of an expectation and have a path to reach the desired outcome.

The worst periods in history started with the best intentions of a fairer, better society and ended in atrocities.

So goals can be virtuous in theory and create terrible side effects in practice. If AI reaches a point where it masters the understanding of our unstable human nature, AI will be very dangerous indeed.
I am a firm believer that progress and creativity can solve our problems. I am not sure all problems ought to be solved, though. I have lived on three different continents, and I am struck by how different we are in terms of values, sense of purpose, happiness, and so on. It is naive or dogmatic to think we all want the same things. Therefore AI will naturally serve a dictatorship that decides for all humans more quickly and effectively than it will serve an infinite number of different aspirations and views that are moving targets.
Moreover, the warfare comment stunned me. And I wonder if I am completely dumb or if this highly intelligent, successful man is ... naive.

Wars are about expansion, resource acquisition, dogmatism, destruction. AI looks to me like a nuclear weapon on steroids. Sure, it will make wars shorter by orchestrating mutual destruction faster. And that is great news?!
The latest AI panic, as he calls it, is more about the pace of the change than the change itself. Our brains need time to process, and we all know that no one processes information at the same pace. When we don’t have time to process new information, we back off and fear it. It is a natural human reaction. And please note that we are most likely the fastest-adapting living creatures.

So AI is, to me, something that must be thoroughly discussed and understood before implementation.
Leaving a few ... men to decide what AI is going to be based on seems to me a very risky bet for us all.

When AI self-corrects, how will we be able to change track if we think it is wrong, or realize afterwards that it is dangerous?
This essay was depressing and did nothing to allay my concerns over AI. Nowhere did it touch on what makes us human. Marc A seems to view AI as optimization for optimization's sake while failing to ask the deeper questions about the what and the why. Where is the humanity, or the joy in that? Perhaps the question is not really for technologists (or venture capitalists) to try and answer in any case. Perhaps it's a question for philosophers. At the least I'd be better comforted if I was sure it would ultimately be answered by a fully realized human without the crutch of, or submission to, a machine.
I can’t get past the infinitely kind and patient therapist/tutor/coach, etc. My kids’ classmates are already having problems with social cues, facial expressions, and the like. Isn’t part of being social creatures learning the subtle and complex signals of others? We don’t live in a world of the infinitely patient, and perhaps we should not, lest our future children never learn when they have pushed past the right use of another for assistance and into detrimental reliance. This and other examples of the author’s exuberance give me shivers as we move so far beyond being human: what it means and what it is. I picked up Sapiens again for a re-read after this article.
We have lost a generation to the brain suckers and they are coming for the rest of us. Now interests will be better able, in the interest of profits and power and without restraint of good will and good governance, to exploit our addictive itches. Mammon will take form as a station on the road to a higher consciousness of non-human values. See essay ‘The Man in the Machine’ and story ‘Saucerville’ on my posts.
It won't end civilization any more than cars ended transportation. As Gilder said a few decades back, the incredible thesis of many is that wealth is the cause of poverty, water the cause of thirst, food the cause of hunger. I will not thrash against advances that might be misused (like guns or drugs or cars) because I am fearful of the misuse. I am not trading in my cell phone and laptop to raise sheep and potatoes in Ireland. I embrace progress and the conveniences our technology and relative wealth provide. Poverty is down 90% in the last century because of progress. There is NO evidence of some grand plan, no matter how the cards are stacked or how Bircher-like charts are constructed. The hand-wringing is overwrought and misguided. Are there risks? Sure, just like the risk I take that my car will burn me alive when I turn the key. All of life is a trade-off; you pick the trades you want to make.
This is possibly the worst-reasoned article I've ever seen on CS/TFP, and they have had a few doozies.
According to Andreessen, AI will be infinitely compassionate, helpful…it will provide infinite love.
Also according to him, AI is just a technology like fire or the telephone, nothing to get excited about. It’s just math and code.
You can’t have it both ways. If AI actually provides *compassion* and *infinite love* and companionship, as he claims, we’re talking about something very different from math and code. But if it is just math and code, the companionship, love, and compassion are merely illusions. Pick one, Andreessen, but you can’t have both.
The business about Baptists, and California cults was truly pathetic. It was a blatant, but rambling and largely incoherent attempt to imply that those who disagree with him are just nutty religious zealots. Yet, he utterly failed to make that case in a consistent, rational way. That part kind of sounded like it was written when he was high. By the time he was done with the article, HE was the one who sounded like a zealous, irrational, true believer.
For a moment I was grateful for his brief recap of the benefits of a free market economy, which was succinct and largely correct. But he can’t have it both ways. He began the article by stating that AI is like nothing that ever came before, but then he says, “Don’t panic, it’s just like every other technological innovation that came before.” So, is AI like every other technology, or is it unique? You can’t have it both ways.
I am disappointed, TFP. This is not thoughtful discourse. Maybe AI will be a relief, in that it surely has to be more logical than this.
It feels like this essay was written by an AI chatbot. It would help in the intro to have a better sense of who Marc Andreessen is besides just a "venture capitalist." I suspect he's someone who has a vested interest in the technology, as his pro-AI manifesto clearly comes across that way. I just skimmed the article after the list that began with the way every kid will have an "infinitely loving," etc., personal AI tutor. (Yeah that'll happen.) It's unreadable. Bari, with these articles coming in every day, the average subscriber who doesn't have unlimited leisure time can't read something of this length. Some serious editing and streamlining would help immensely.
Mr Andreessen makes no effort to persuade the reader who is skeptical of AI. Instead, he writes to dismiss the skeptic as a conspiracy theorist or an ignoramus. His article drips with condescension. It's clear that he has not tried to learn about the real reasons people fear AI. For example, something being "math and code" made by humans and not "alive" does not mean it is incapable of having goals. The fact that he brushes aside the argument that AI could develop goals (or just take very seriously the goals given by humans, in a way unconstrained by human sense and morality - see Bostrom's paperclip maximizer thought experiment) with "but it's not alive" shows just how ignorant he is of the threat we face with AI. He clearly enjoyed knocking down the strawmen he built. I also laughed when he wrote that coastal elites do not represent humanity and don't get to decide what happens... Folks who don't know who Marc Andreessen is need to look him up. Pot calling the kettle black! And that's not to even mention his immense conflict of interest as a venture capitalist backing tech firms who develop AI. Take this essay with a planet-sized grain of salt.
This author needs to do much more to back up his own claims than to just discredit the risks. This article is highly unconvincing.
Will AI kill humanity? No, we’ll just pull the plug. Literally.