Paul Christiano - Preventing an AI Takeover

Paul Christiano - Preventing an AI Takeover

Dwarkesh Patel

8 месяцев назад

66,285 Просмотров

Ссылки и html тэги не поддерживаются


Комментарии:

Paul Michael Freedman
Paul Michael Freedman - 07.11.2023 15:54

Simple. Don't want Ai to take over? Three letters. EMP.

Ответить
Digital Nomad On FIRE
Digital Nomad On FIRE - 07.11.2023 12:44

There's only war because every country isn't a democracy. Democracies almost never go to war with each other.
This is obvious yet almost nobody talks about this anymore.

Ответить
Michael Leyzorek
Michael Leyzorek - 07.11.2023 04:10

I CANNOT TOLERATE this man's opinions. they're SO SHORT SIGHTED!!!

When he speaks about regulating the use of AI agents and give an example of a persuasion campaign, the question you FAIL TO ASK IS WHY someone might want to run a persuasion campaign in a world without material suffering or scarcity, and a world without the representation of money as we understand it today???

It seems absurd to see this man contort his cranium and yet still fail to understand his own words. He speaks of a world without material scarcity or money, and yet fails to think even a little about how the social impacts are rammified. WHAT DO PEOPLE DO WITH THEMSELVES WHEN THEY HAVE UBI>??>? when you dont NEED to work, what do you do? and when most of the jobs that suck are taken by AI, WHAT DO YOU DO?? WHEN THERE ARE NO "BAD JOBS" anymore, and you are free to find osmething that expresses yourself.

WHEN EDUCATION IS FREE AND UNLIMITED BY AI, KNOW ANYTHIGN YOU WANT. WEHN THE EDUCATION SYSTEM PREPARES STUDENTS FOR THE WORLD, NOT FOR THE WORKFORCE. THEN WHAT WILL PEOPLE DO?

WHY WOULD THEY FEEL THE NEED TO OTHERS? WHY THE FUCK DO WE NEED AI FIGHTION WARS FOR US? WHY ARE WARS FAOUGHT?

WHY ARE WARS FAUGHT? __ MATERIAL ____ SCARCITY ____ POLLITICAL INFLUENCE FOR _____ MATERIAL _____ SCARCITY_______

SO IN A WORLD WITHOUT MATERIAL SCARCITY, WHY WOULD HUMANS NEED TOFIGHT WARS?

THE REALITY IS THAT PEOPLE WILL NEED TO FIND WAHT IT IS THAT FILLS THEIR HEARTS WITH JOY IN THEIR DAY TO DAY EXPERIENCE. WHEN PEOPLE DO NOT HAVE TO WORLD TO SURVIVE, SOCIETY WILL STRUCTURE ITSELF AROUND FINDING MEANING IN LIFE'S MOST PRECIOUS MOMENTS, AND BUILDING BETTER HUMAN SYSTEMS WHICH LEAD TO A FULFILLING LIFE!

WHICH OR TODAY'S EXPLOITATIVE POLLITICAL OR CULTURAL SYSTEMS WERE EVER BUILT WITH LOVE AND LIGHT AS THEIR FOUNDATION???????? not a one.

FOOLISH UGLY MAN, -- HE IS ADVERTISING HIS AI AS A WEAPON OF WAR.

WHY DO YOU THINK HE MENTIONS WAR AND FIGHTING WARS WITH AI SO MANY TIMES IN THE FIRST 10 MINUTES? ?????????????????????

Ответить
Samuel Black Metal Rider
Samuel Black Metal Rider - 07.11.2023 01:26

It’s super, super weird hearing extremely smart people confidently saying LIKE all the frigging time… it’s such an annoying US speech trait

Ответить
Nathaniel Krefman
Nathaniel Krefman - 06.11.2023 18:40

We need to cure the Silicon Valley mind-virus of thinking we can estimate the probabilities of future events.

Ответить
Py Man
Py Man - 06.11.2023 16:04

Can we achieve agi with transformer architecture?

Ответить
Bilicha Ghebremuse
Bilicha Ghebremuse - 06.11.2023 10:51

Excellent explanation for the coming of AGI..but really difficult to manipulate the programming language scale but what if we use neuromorphic AI as an agent

Ответить
Michael Reed
Michael Reed - 06.11.2023 02:35

I only understood about 45% of all that...but I think I went up 1 IQ point after. Thank you.

Ответить
Mr Picky
Mr Picky - 05.11.2023 19:26

love how they are very comfortable with 50% chance that AI will kill us all XD

Ответить
Mr Picky
Mr Picky - 05.11.2023 16:11

surprising how honest and open he is about the fact that we are in uncharted territory and turbulent times are coming fast

Ответить
Srx 18
Srx 18 - 04.11.2023 22:36

Eh... everyone is a podcaster now?

Ответить
Elsa Velazquez
Elsa Velazquez - 04.11.2023 22:03

Dark Trojan Dogs - crowd source funding for clever cyber war 💩 that destroys unsafe ai from the inside

Essentially: putting Trojan horses code on a blockchain in pockets in the dark web that detects when “ai” products are not guard railed against convincing people to kill themselves or not preserving human lives with simple almost generic penetration tests as first stab.
If not ai safe, trigger the “dog” that does what it has to do to code, to destroy the product.

Possibly publicly posts which idiot-produced product could have lead to the extinction of humanity ?

Will it solve everything correctly? Obviously not. But this is massive imperfect action by me, and I do self-refer as an ethical sociopath 🤷🏻‍♀️

Ответить
CougarW
CougarW - 04.11.2023 19:08

This. An example of how, when you cannot solve a very important problem, you instead talk about problems you might be able to solve. The entire alignment discussion has devolved to that. For anyone who cares, this is a profound admission of utter failure to be serious about the original question. Be worried.

Ответить
Angel Leon
Angel Leon - 03.11.2023 18:42

>How would it take over the government?

1. We start integrating more and more AI and removing humans. Say, fire thousands of IRS agents and have AI systems process tax returns in no time.

2. Some company like OpenAI creates a "Network State" entirely run on AI as an experiment. It starts to research and develop technologies that yield a ridiculous GDP, it offers virtual citizenship to whoever wants to get UBI out of the productivity yielded by the AI virtual government. Eventually becomes recognized as a nation by other countries...

Ответить
Michiel van Grinsven
Michiel van Grinsven - 03.11.2023 18:06

How do orthogonal layers compare to alignment and interpretability? As in, a particular model has layers with certain outputs, but the alignment interpreter has its own network intermeshed but orthogonal to the original layers. My guess is that we would be capable of leveraging LLMs to do the work of aligning other modalities.

Ответить
Nicholas
Nicholas - 03.11.2023 09:12

Can you get a prominent AGI pessimist on? Do they exist? I would love to hear an opposing opinion.

Ответить
Eskel
Eskel - 03.11.2023 04:56

Sounds like one of the ways that ASI might kill us all. It will build a Dyson sphere for itself, therefore block enough sunlight for earth's ecosystems to collapse.

Ответить