umvi5 years ago
Dang, 90% chance we'll have AGI* in 50 years? I would bet money against that prediction if I could. Any bookies out there? There's just no way AGI can replace blue-collar work like plumbers and electricians, etc. that quickly. Programmers, also unlikely. Sure, you can have AIs generate more code for you, but then you'll just have the programmers working one abstraction layer up from that. Also, AI-generated entertainment always has and always will suck. I will eat my hat if an AI in 50 years can generate a full-length, original Hollywood-esque movie that I enjoy. Heck, I'll even settle for a book that can rival human authors.

Sure, I could see a lot of medical professions and other "knowledge bank" type jobs being replaced. I've always thought optometrists could largely be replaced with "measure my prescription" booths controlled by a computer. But anything requiring any creative juice whatsoever will likely not be replaced.

*Yes, it's not true AGI but AI that replaces 95% of all jobs

melling5 years ago
In 1900 the earth’s population was 1.6 billion. In 1903 the Wright Brothers built the first airplane. In 1947, humans flew at supersonic speed for the first time. That was in the span of 50 years. 22 years later, we were on the moon.

A lot can happen in 50-70 years.

We now have 7 billion people, with more of the world coming online to do R&D. China, specifically, has made it a goal to lead the world in AI by 2025.

India’s economy should grow over the next 2 decades and they will also become a world leader.

With so many resources, the world should easily advance more in the next 50 years than it did in the last 100.

300bpsmelling5 years ago
When I was 14, I was 6'5". Extrapolating that out, I should probably be 100' tall by now.
paulcole300bps5 years ago
brb writing a pointless think piece about why I am 90% sure that will happen by 2070
nitwit005melling5 years ago
In the 50s and 60s aviation and space people often thought that things would keep improving exponentially, as they'd lived through such incredible advancements. Instead things slowed enormously, despite vast efforts put in.

We've lived through enormous computing advances, but it's been fairly obvious for some time that hardware improvements are slowing.

I'm sure there will be amazing advancements in the next 50 years, but I expect a lot of the progress to be in fields that are either currently unknown, or seem unimportant today. Those new fields will see better return on investment.

apinitwit0055 years ago
> despite vast efforts put in.

I disagree with that part. Things slowed dramatically because we stopped putting extreme amounts of effort in, largely because there wasn't enough market demand for aviation or space flight beyond 1970s technology... for a while.

nradovapi5 years ago
The worldwide market for aviation and space launches is far larger now than in the 1970s.
fiblyenitwit0055 years ago
Funding for NASA also dropped off a cliff, no government really cared about going to space anymore, and there wasn't a single company out there with the funds and motivation to go to space.

Now we have countless companies and several countries all competing and building off each other's work in the AI space. So long as people don't get bored or the global economy doesn't collapse, progress should keep chugging along. Another key difference is that it's basically free to get into. Anyone out there can download data sets, existing algorithms, and get to work on tweaking things. There's a strong foundation for any motivated person to build off of.

6nfnitwit0055 years ago
We are putting lots of mass into space, and it's cheaper than ever. I think if you plot mass launched per year you'll see a different picture than what you are painting.
Thorentismelling5 years ago
Putting AGI in the same field as population growth and flight is disingenuous. They are entirely different things, and the rapid advancement of one means absolutely nothing about the advancement or even possibility of the other.

Overcoming physical limitations is one thing. An intelligent being creating something equally or more intelligent than itself? And obviously I don't mean reproducing. Creating an entirely new class of thing with intelligence equal to the creator's is very different from using the forces of nature to give you an edge over gravity.

mellingThorentis5 years ago
The point with the population is that we will have 7-9 billion people, with more people doing research, engineering, etc. How many scientists were there in 1900?

We will figure out how the brain works, make better transistors, develop better algorithms, etc

An AI “space race” between the US and China, for instance, will push the field forward over the next decade.

TaylorAlexandermelling5 years ago
Collaboration also pushes this field forward. Lots of great AI research coming from China and the US and luckily lots of it is shared openly. IMO that’s how to maximize the rate of innovation.
sheepdestroyermelling5 years ago
Did you miss the news that average IQ is dropping across the board due to various poorly understood factors (rising CO2 levels, endocrine disruptors, etc.)? We are getting a massive number of dumb people à la Idiocracy; I'm not sure that also means more smart scientists in total.
lostmsusheepdestroyer5 years ago
If there's a drop in IQ, right now it is not even comparable to the population rise.
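A quick back-of-the-envelope supports this. Treating IQ as normally distributed (SD 15) and granting a hypothetical 3-point drop in the mean (my own illustrative assumption, not a measured figure), the absolute number of people above any high threshold still grows with population:

```python
from math import erf, sqrt

def frac_above(threshold, mean, sd=15):
    """Fraction of a normally distributed population above a threshold."""
    z = (threshold - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

# 1900: ~1.6B people, mean IQ normalized to 100 (SD 15)
# Today: ~7.8B people, with a hypothetical 3-point drop in the mean
then_count = 1.6e9 * frac_above(130, 100)
now_count = 7.8e9 * frac_above(130, 97)
print(f"above IQ 130: {then_count/1e6:.0f}M then vs {now_count/1e6:.0f}M now")
```

Under these toy assumptions there are roughly three times as many people above IQ 130 today as in 1900, despite the assumed drop in the mean.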
thoughtstheseusThorentis5 years ago
Comparing historical projections for a variety of topics seems like a decent point of reference. Population growth and flight seem, in hindsight if not at the time, vastly more tractable problems than AGI, yet estimates for them were still way off by a lot.
computerphagethoughtstheseus5 years ago
> Indeed, eight years before Orville and Wilbur Wright took their home-built flyer to the sandy dunes of Kitty Hawk, cranked up the engine, and took off into the history books, Lord Kelvin, the President of the Royal Society of England made a forceful declaration. "Heavier than air flying machines are impossible," said this very powerful man of science....Rumor has it Lord Kelvin was slightly in error.

https://www.nasa.gov/audience/formedia/speeches/fg_kitty_hawk_12.17.03.html

mempkomelling5 years ago
You forgot about global warming and that we are in the sixth mass extinction. Global civilization collapsing in 50 years seems as likely as AGI.
loosetypesmelling5 years ago
Improving performance on a metric that we actually know how to quantify is fundamentally different from counting interesting but potentially one-off stabs as incremental progress.

Conflating them only demonstrates how far we have to go.

unisharkloosetypes5 years ago
We do know how to quantify intelligence (unless you're one of those who includes consciousness in the definition of AGI, which seems almost religious to me). I expect it will quickly switch to an argument of what deficiencies AGI candidates have and how smart they really are, much like ML models that supposedly beat humans at narrow tasks today, rather than possible versus impossible talk.

As for impossible talk, we have biological examples all around us of what needs to be built. We just need to imitate. Much like computer vision, algorithms sucked at it until they didn't (and all it took was someone scaling up an old design idea plus a lot of data). On the scale of gigantic ambitious goals it's pretty special in that regard. Curing cancer or death or mars colonies may indeed be impossible, by contrast.

I will agree that I trust no one's ability to predict anything. They are all just making almost entirely uneducated guesses using a few variables out of some vast number of unknowns.

visargaunishark5 years ago
> unless you're one of those who includes consciousness in the definition of AGI

Why stick with this concept - consciousness - which is not well defined, instead of using a much more practical concept: embodiment? Embodiment is the missing link towards human-level AI. Agents embodied in a world, like AlphaGo, already surpass humans (on the Go board, or in Dota 2); we just need to take that ability to the real world. The source of meaning is in the game, not the brain. What we need is a better simulator of the world, or a neural technique for imagination in RL, which is in the works [1].

1. https://arxiv.org/pdf/2005.05960.pdf

loosetypesunishark5 years ago
Thanks for the reply.

> We do know how to quantify intelligence (..).

But how exactly do we do that?

littlestymaarmelling5 years ago
> In 1903 the Wright Brothers built the first airplane. In 1947, humans flew at supersonic speed for the first time.

And since then, not much has changed. Commercial supersonic flight never took off, and planes nowadays still use turbofans (invented during WWII). Engineering fields commonly make many breakthroughs in a really short time, and then settle down for a long period. We can't predict how far AI progress will go. In the 50s, having flying cars by 2000 didn't sound unrealistic given how much flight advanced during the first half of the century. Yet I don't think anyone nowadays believes we'll have them by 2100.

Also, between Da Vinci's Codex on the Flight of Birds (1502) and the Wright brothers' flight, four centuries passed. And regarding AGI, we might be closer to Da Vinci than to the Wrights.

zxwxmelling5 years ago
In 70 years of physics we went from the photoelectric effect to the Standard Model. But for the last 50ish years, it has remained the standard.

70 years from the first flight to the Concorde and the Saturn V. But in the 50 years since, improvements in aerospace have been incremental.

In 75 years we went from ENIAC to TFLOPS in a laptop. But it looks like that breakneck pace is slowing down sharply. We've been doing AI nearly as long, and have gone from, say, ELIZA to GPT-3. A huge advance, but not AGI.

A lot can happen in 50 years, but we've already had our first 70ish years with AI without an AGI breakthrough.

To the definition of AGI in the link: maybe a hundred million data scientists can hone a million models, one per "economically viable" task, and start chipping away at the 95%-of-the-economy target, but so far I'd wager AI has put many more people to work than out of work.

dwaltripzxwx5 years ago
You have a good point. However, to be a bit pedantic, fully reusable rockets, which we are very much knocking on the door of, are a major stepwise improvement.

It just goes to show that technological advancement can happen rather unpredictably.

kajecounterhack5 years ago
> There's just no way AGI can replace blue-collar work like plumbers and electricians, etc. that quickly.

OP specifically called out AGI as not requiring touch or taste, only text to beat the Turing test.

> Programmers, also unlikely... Sure, you can have AIs generate more code for you, but then you'll just have the programmers working one abstraction layer up from that.

At what point do you stop calling them programmers and start calling them system architects? If I'm a programmer and my whole job can be replaced, isn't that replacing _some_ programmers? I think it's fair to argue that some programming jobs would be straight up gone. Maybe most of them.

> Sure, I could see a lot of medical professions and other "knowledge bank" type jobs being replaced. I've always thought optometrists could largely be replaced with "measure my prescription" booths controlled by a computer. But anything requiring any creative juice whatsoever will likely not be replaced.

We're not talking about what today's AI can do -- today's AI sure as hell can replace knowledge banks and some medical tasks like radiology and optometry, and yeah, it can't quite make blockbuster movies. But generative AI has come a long way and there are reasons to be optimistic again. Alex cites GPT-3 and iGPT as evidence of this trajectory.

He also says "imagine 2 orders of magnitude bigger" -- models with 1.75e13 params. What emergent generative powers might we discover? Synthesizing a blockbuster movie no longer seems entirely out of reach, even if we have to go another order of magnitude bigger and make several more algorithmic breakthroughs.
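For a sense of scale, a minimal sketch of that arithmetic (GPT-3's commonly cited 175 billion parameters; the fp16 storage figure is my own illustrative assumption):

```python
# Back-of-the-envelope: GPT-3 is commonly cited at 175 billion parameters.
gpt3_params = 175e9
scaled = gpt3_params * 100   # "2 orders of magnitude bigger"

# Storage for the weights alone at fp16 (2 bytes per parameter),
# ignoring optimizer state, activations, and replication.
tb = scaled * 2 / 1e12
print(f"{scaled:.2e} params -> ~{tb:.0f} TB of fp16 weights")
```

Even just holding such a model's weights in memory would take tens of terabytes, which is part of why "2 orders of magnitude bigger" implies algorithmic and hardware breakthroughs, not only more compute budget.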

Tehdasikajecounterhack5 years ago
> OP specifically called out AGI as not requiring touch or taste, only text to beat the Turing test.

Erm, only text in the Turing test? That's a pretty non-general form of artificial general intelligence.

goatloverkajecounterhack5 years ago
> OP specifically called out AGI as not requiring touch or taste, only text to beat the Turing test.

But a text-based AGI is not replacing plumbers and electricians, which means it's only general in limited areas, like generating human-level text. It would be impressive and no doubt have plenty of uses, but it's not a threat to paper-clip the world or put everyone out of a job.

addled5 years ago
I think if we get to AGI full-length movie directors, they won't care if you like it or not. Humans will no longer be the target audience.
esrauchaddled5 years ago
Can you explain further? It seems very feasible to have AI generated media for humans as the target audience to me.
randomdataesrauch5 years ago
It seems feasible for us humans to make media directed for dolphins, but why?
stickfigurerandomdata5 years ago
We make bird baths and dog parks and cat trees and hummingbird feeders and hamster wheels... I'm sure if dolphins expressed an obvious love of cinema, somebody would make movies for them.
randomdatastickfigure5 years ago
> We make bird baths and dog parks and cat trees and hummingbird feeders and hamster wheels...

For the entertainment of the creator. Maybe you are right that AGI will be entertained by watching humans turn into vegetables as they aimlessly watch a screen for hours at a time, but I suspect not.

stickfigurerandomdata5 years ago
I think you're oversimplifying human motives here. A lot of humans genuinely care about the well-being of pets and wildlife, even when they're not in sight. Look how much effort was put into reviving the California Condor.

Maybe let's hope that a hypothetical AGI finds us "cute".

randomdatastickfigure5 years ago
> Look how much effort was put into reviving the California Condor.

AGI might have reason to keep us alive, sure, but why create media for us? Given our current trajectory, AGI will be very energy hungry. How will it justify using that energy for the sake of human entertainment?

esrauchrandomdata5 years ago
Why do humans make media directed at other humans? There's some intrinsic and extrinsic motivation that makes it so, and those conditions aren't met by making media for dolphins.

I understand the hypothetical dangers of an AGI with the "wrong" reward function where "wrong" includes "like humans in terms of species-tribalism and intelligence-smugness", but I don't actually see a media generator AI necessarily having human-esque identity that you're suggesting.

goatloveraddled5 years ago
What would be the economic incentive if humans are no longer the target? Are the AGIs going to become consumers? An even more fundamental question: let's say that does happen. Why would humans want that outcome? And if that's where AGI is headed in general, then isn't that good reason for a revolution by the humans?
JoshuaDavid5 years ago
The problem with betting against you here is the question of "what fraction of the possible worlds in which AGI exists are worlds where I am alive to care about collecting from you, and care about money, as compared to those worlds where AGI does not exist in 50 years".

That said, I would take the flip side of this bet to be settled at the end of the 50 years, for at least 90% chance of AIs able to create hollywood-esque movies within the next 50 years, though possibly not for AI plumbers in that time frame. For that matter, I would put at least 10% on a movie with an AI-generated script having global box office numbers topping $1B by the end of this decade.

onionisafruitJoshuaDavid5 years ago
On the other hand, a world with rapid tech innovation is the only world where I'm alive in 50 years. So I'm probably best off betting on AGI existing 50 years from now. I think laws are such that my heirs don't have to make good on my wagers.
JoshuaDavidonionisafruit5 years ago
Laws are indeed such. It would be kind of cool if long-term bets were possible to do, but the value of winning or losing bets far in the future is so strongly impacted by your personal discount rate and likelihood of living to see the outcome that making any gain off of those kind of bets is impractical pretty much no matter how sure you are.
Cyphaseonionisafruit5 years ago
Not your heirs per se, but your estate would have an obligation to make good.
nradov5 years ago
An autorefractor can already figure out a patient's eyeglass prescription without an optometrist. It doesn't use any AI, just high precision optics with deterministic calculations.

https://en.wikipedia.org/wiki/Autorefractor

However optometrists do a lot more than just write prescriptions for corrective lenses.

umvinradov5 years ago
They can do more than that, yes, but the vast vast majority of glasses users just need a prescription refresh every 5 years.
bigyikesnradov5 years ago
I've used one of those devices. It takes 30 seconds to learn how to operate and is very accurate. I really don't understand how optometry exists in its current state.
ankobigyikes5 years ago
They aren't that accurate and actually promote bad prescriptions - they tend to overestimate myopia and astigmatism.

https://pubmed.ncbi.nlm.nih.gov/16815252/

https://en.wikipedia.org/wiki/Autorefractor

bigyikesanko5 years ago
Well, I tried it along with ~10 other people who already knew their prescription and it was self-reportedly accurate every time.

It would be nice if this were at least an option for people who might not have vision insurance or for whatever reason don't want to go through the traditional system.

Cyphasenradov5 years ago
For a moment I read that as autorefactorer.
bigiain5 years ago
> I will eat my hat if an AI in 50 years can generate a full-length, original Hollywood-esque movie that I enjoy. Heck, I'll even settle for a book that can rival human authors.

and

> Yes, it's not true AGI but AI that replaces 95% of all jobs

To be fair, creating an enjoyable full-length Hollywood-esque movie, or writing a creative and entertaining book, is something well under 5% of humans are capable of doing.

Perhaps you're setting the bar for AGI too high there? Does it really need to equal and exceed the capabilities of the very best human attempts at movie making and book writing to be considered "AGI", when the vast majority of humanity cannot do that?

(Also, I suspect 95% of jobs probably actively discourage "creative juice" being used. And nobody really wants to found their startup with someone who's "just an ideas guy!", if all he's ever contributing is "creative juice".)

hetman5 years ago
We've been waiting for "knowledge bank" type jobs to be replaced by AI any year now since the expert systems of the 1980s. But I feel like this kind of thinking reveals an ignorance of these professions. Optometrists do a lot more than just "measure your prescription"; they also deal with eye health and the peculiarities and variations that come with biological systems. But even when it comes to your prescription, what an optometrist does is deal with the subjective nature of your experience, and that's something that's difficult for an AI to do today (or probably for a good while yet).

I think there's something very insightful in your post though, and that is the observation that programmers will just work one abstraction layer up. In general, it has been demonstrated that a combination of expert + AI is far more effective than either one on their own. I can see AI becoming an indispensable tool in the tool belt of an expert, and since we want the best possible outcomes, we're not going to throw the expert out of that equation any time soon. What we may see is the need for fewer experts to get the job done, as the automation capabilities of AI allow them to become more efficient. It's just like the power loom: it certainly reduced the number of humans needed, but even today you still need some people to service the machines and program their patterns.

lopmotr5 years ago
I think a single human can envision an entire movie and all the other human labor is just low-creativity grunt work suited for AI.
OrderlyTiamat5 years ago
I agree, full AGI at 90% likelihood in 50 years is optimistic. However, a book rivalling human authors? I'd definitely assign higher than 90% likelihood to that. GPT-x might not cut it, but there's sure to be a lot of work in that direction, and 50 years is a long time.

I'd like to take that bet. I'd even speed up the timeline a bit, especially when it comes to the book.

Let's say: if there's no completely AI-generated (so no human editing) book at #1 on the New York Times best-seller list for at least two weeks before 2040, I'll be very surprised.

jungofthewon5 years ago
There are people in this thread making bets on these timelines, so you totally could.

https://www.lesswrong.com/posts/hQysqfSEzciRazx8k/forecasting-thread-ai-timelines

dmch-15 years ago
These types of predictions disregard human factors. Many occupations are inherently human. Digital audio is no worse than the human voice, yet people still go to live concerts. Computers play chess better than humans, but there are still human chess players. Besides entertainment and sports, machines can never function as politicians or business people. It is also unlikely that many service jobs involving human-to-human interaction could be replaced by machines. 95% is just too far off.
alan-crowe5 years ago
There are a lot of books out there to use as training data. Fifty years is plenty of time to perfect the plagiaristic mashup. I expect autowriters to dominate the market for "boy meets girl, boy loses girl, boy regains girl." stories that are set in the past. There might be a job, half novelist, half computer programmer, done by humans who tweak autowriters to add up-to-the-minute details to their stories.

I doubt that "I'll even settle for a book that can rival human authors" states a sharp-edged criterion that separates before from after. Crossing the gap between rivaling Barbara Cartland and rivaling Tolstoy might take a century of software development.

majewsky5 years ago
> 90% chance we'll have AGI* in 50 years? I would bet money against that prediction if I could.

You cannot make a bet on a statement of probability. It's unfalsifiable (unless, within the next 50 years, someone finds a way to take a random sample of the multiverse).

Engineering-MDmajewsky5 years ago
Well, you just bet against odds which you perceive would net you a positive outcome, surely? That's all you do with any uncertain gamble.