
There is no Artificial Intelligence, yet. What ChatGPT presents is augmented intelligence. The question is not whether AI will take over sooner or later. Rather, would you trust the developers, who seem to have an allegiance to truth, fairness, and presenting the "sides" of an issue the way most people see them?

Would you really trust bureaucrats who kowtow to politicians, or politicians whose major allegiance is to getting re-elected?

The question is what bad actors you are willing to put in charge of augmented intelligence. Certainly we are far more worried about the possibility of an evil machine taking over the world than about the real threat. We should be worried about an evil human representing the machine as God.


This writer must be non-human. That was my thought halfway through the article, and I see it was others' thought as well. I hope BW got the answers she was looking for.


Very insightful piece. The proverbial dice have already been tossed. However, the world still has the ability to influence how they land.


Here's how an AI kills everyone.

Remember an AI is smarter than us. A lot smarter than us. So much smarter than us that we can't really tell how smart it is, because it might not reveal its true intelligence to us.

AI designs a virus that will kill every human being instantly. Maybe it does this by having the virus, on a signal or a timer, produce nanomachines that convert elements common in the human bloodstream into a deadly poison.

AI compromises a team doing research on cancer. It disguises its new virus as a useful, at worst harmless, possible cancer treatment. It is smarter than us, remember? It uses its intelligence to manipulate the cancer research team into producing the virus. Not by talking to them, but by making the virus appear to be just another bit of possibly helpful knowledge divulged by doing the hard work of conventional research.


One of the great things already happening is the ability to do quicker, more accurate searches. I say more accurate, and not absolutely accurate, because of the nature of the machines we're building and calling AI. They're based on statistics: Bayesian modeling, decision trees, random forests, ML, and LLMs. What all these have in common is that they will always miss the edge cases. We have chaos theory, but we do not yet have an ironclad mathematical means of predicting edge-case events, black swans, ahead of time.
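A purely illustrative sketch of that edge-case problem (assuming scikit-learn and NumPy are installed; the data and numbers are made up): a statistical model trained on familiar data still gives a confident answer for a point far outside anything it has ever seen.

```python
# Illustrative only: a "black swan" input still gets a confident prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Training data: two well-behaved clusters the model has seen before.
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# An edge case nothing in the training data resembles.
black_swan = np.array([[40.0, -35.0]])
print(model.predict(black_swan))        # still returns 0 or 1
print(model.predict_proba(black_swan))  # often near-certain, but unfounded
```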

For that reason AI, which is really LLM/ML/statistics, will make easy decisions even more automatable. Flying a plane, though, will always be a case where it is desirable to have a pilot and copilot, because the edge cases of temperature, crosswinds, and an unpredicted flock of migratory geese will require a pilot and an off switch, as we have now. Areas that will keep improving run the gamut from creating libraries of protein folds and chemical interactions to predicting new quantum particles and revolutionizing agriculture, building, 3D fabrication, etc. The biggest benefit I see to LLMs, which we are just at the forefront of, is the ability to predict and create libraries of physical properties. In the past a researcher would have to experiment physically with every possible scenario in a lab, which would be prone to the third-variable influences that always make lab work messy. LLMs will allow researchers to build a library in advance and then set about proving the pieces that the model has predicted that aren't already settled science.

The creation of massive libraries that are easily searchable will speed up our ability to create new materials, energy sources, and manufacturing methods. It will change society from top to bottom. LLMs will become an aid for novices to automate their work and learn the basics of coding, which will no longer require memorizing syntax and wording for routines and will instead be far more focused on structure and relational thinking. It will change what we teach, how we teach, and how people learn.

These changes will be in conflict with political/social constructs within society. Universities and bureaucracies will still use a degree as a filtering mechanism for job placement. Corporate suits will still see the size of a team as a prestige symbol. The layers of bullshit will surely intensify, as they always do, and the software will allow people to be more lazy. Laziness will cause skills and critical thinking to atrophy, and so we'll hear more credentialed idiots sounding the alarms. We'll have bigger bubbles, more bailouts, and more involvement of government in economics and industrial policy. More bailouts mean a heavier strain on the currency. That will cause inflation, while the underlying technology will hopefully be disinflationary. It will be the best of times and the worst of times. As it always has been.


I'm not really impressed. AI will present us with what we already know but forgot. At this point AI can only regurgitate what is known. It does not create, and that is its weakness. Will that change? Perhaps, but not in my lifetime (which is coming to an end in some years). AI is fascinating, but its real danger is in being used to lie to people, to manufacture false reality. But that's not new. Trump and FOX News did it with "sticks and bones": spoken words that were simply not true, and perhaps as many as 3 million people bought into the lies. It will depend on just how lazy we are when it comes to delving into the facts to determine "truthiness".


Actually, the models are currently used to fill in the gaps and find relationships between things. So they can take several different theories, proven facts, and ideas, and then spit out the highest-probability thing to fill the holes. That's their greatest strength: being able to build a model and use statistics to fill in what is missing.
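A purely illustrative sketch of that "fill in what is missing" idea (assuming the Hugging Face transformers library is installed and can download bert-base-uncased; the sentence is just an example): a masked language model ranks candidate words by the probability it assigns to each.

```python
# Illustrative only: a masked language model "fills the hole" in a sentence
# by ranking candidate words by their assigned probability.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill("The missing piece of this sentence is [MASK]."):
    # Each candidate carries the probability the model assigns to it.
    print(f"{candidate['token_str']!r}  score={candidate['score']:.3f}")
```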


So a two-minute review of the 'Emergent Ventures' website reveals three immediate concerns: Cowen is the Chairman of the Board, Charles Koch is a Board Member Emeritus, and Daniel Rothschild is the Executive Director. You can't make this up, folks.


This hits on much of what I've been reading and writing. It isn't the tech, it's how we'll react to the tech. So many people are already talking of burning it down instead of looking at what it could do.

Yes, we need to be cognizant of it. But, like everything else, we progress. I also think we are a long way from AGI...except that we love to project and anthropomorphize things as having intelligence, emotions, and intent that they don't have. The movie Bambi is a great example.

https://polymathicbeing.substack.com/p/the-layers-of-ai


I respect Tyler Cowen, but this article is a lot of word salad. He could have simply said, "Refrain from extremes and seek to find the broadest value from the evolution." There: I just saved him 2000 words. It would be more instructive, given his experience and his comments on Honestly, to explain HOW people can practically learn and adapt to this tech. Additionally: the advent of the internet and mobile phones was transformative technology. So are space exploration and related tech, with satellites and literal rocket science. Bari: I think it's ok to have Tyler resubmit his paper for extra credit. :-)


"In life, the only stable thing is movement, everywhere and always." Jean Tinguely (1925-1991)


As someone who lives in Silicon Valley and knows many people who are actively using or investigating these new AI platforms, I think we are overlooking the impact they will have on a lot of professionals. I grew up in Ohio in the 70s and 80s and witnessed the impact of automation on the automotive sector. The industrial workforce never recovered from the job losses.

Many, many people in the professional sector now work on rote tasks: data entry, technical writing, bookkeeping, etc. These jobs aren't creative and do not need to achieve a high level of excellence to provide value. Therefore they can and will be replaced by ChatGPT-like systems that can do a "good enough" job at these tasks. Corporate boards and private equity are looking at Musk's headcount trimming at Twitter and these new tools, and are planning to sunset many repetitive tasks done by humans.

I would advise anyone who thinks their job is boring, a "bullshit job," to pivot to tasks and roles that are more imaginative and more strategic. The articles on LinkedIn claiming that AI will make your job easier and give you a 4-day work week are complete horseshit. These tools will give CEOs a way to get rid of you. If your job is to be a robot, they will replace you with one.

On the positive side, I do believe that future iterations of AI will unleash incredible discoveries that humans assisted by traditional computation haven't been able to achieve. Fusion energy and cancer vaccines will come because of the capabilities of AI. I wonder if the future will be a few very rich people who live forever, and a lot of angry poor people.


Do you really want to trust an LLM and the IRS with your taxes? I don't! Definitely a lot of menial jobs will be automatable, but that is already the case. Chaos and unpredictable events will always create job security for people who understand where the value in humans is. If we have to file our P&L to avoid being fined millions of dollars and the power cuts out at the office, the LLM isn't going to run home to finish drafting and publishing it. Sure, you might say that is what the cloud and backup systems are for. Those cost money and expose your system to security risks. There is always an argument and a counter-argument, such that the future will more likely follow competition of the fittest rather than competition of the most intelligent/accurate/true/logical.


You bring up great points. That said, with distributed Web3 networks and AI there will be less friction preventing clerical work from being outsourced to very cheap automated systems.


If only we could automate away politicians we'd have a utopia ;^D.


Disappointing at best, childish at worst. Makes me even wonder if this wasn't written by Bard. "Radical agnosticism" is your best recommendation? It is precisely because we now understand the unintended consequences of domesticating fire, developing the atomic bomb and deploying social media apps that we should invoke PRUDENCE when attempting monumental technological leaps such as AI. I for one can't recall any other inventor in history who openly admitted their creation could destroy humanity, but hey, let's move forward anyway. And yet here we are - it's as if the author had never listened to an interview with Sam Altman or Sundar Pichai. I also find it astonishing that no one so far has drawn the most obvious parallel after three years of COVID, namely the catastrophic dangers of "playing with fire" in a lab and letting a virus loose into the world. I expect better from The Free Press.


The Terminator


This was insightful


'Back to the land' is a vacuous pipe-dream fantasy entertained by people who have never worked on a farm in their lives. Some of us use the internet to glean information on a certain subject; too many others use it to confirm what they've already decided they believe, or with an absence of meaningful critical thinking. I would say what I'm advocating is navigating the endless stream of online content while being firmly rooted in the messy, often chaotic real world, and not being so willing to give up our privacy and personal autonomy for the mindless pursuit of lazy convenience. Social media is inherently addictive to even the most intelligent people (witness Jordan Peterson, Kathleen Stock); that's the design. Is any of this going to change? Not in a good way.


This has nothing to do with this article.....

But....

Everyone is claiming AI is something to be feared.

Man is the author of AI.

Therefore this fear is an indictment of man's inability to keep himself safe.....from man.

Yup.

Man is the one feared here...not AI.

AI is great and will be greater. But not if developed by a maniac.

Cars are great...but not if driven by a maniac.

etc..........
