Comments:
Simple. Don't want AI to take over? Three letters: EMP.
There's only war because every country isn't a democracy. Democracies almost never go to war with each other.
This is obvious yet almost nobody talks about this anymore.
I CANNOT TOLERATE this man's opinions. They're SO SHORT-SIGHTED!!!
When he speaks about regulating the use of AI agents and gives the example of a persuasion campaign, the question you FAIL TO ASK is WHY someone might want to run a persuasion campaign in a world without material suffering or scarcity, a world without money as we understand it today.
It seems absurd to see this man contort his cranium and yet still fail to understand his own words. He speaks of a world without material scarcity or money, and yet fails to think even a little about how the social impacts ramify. WHAT DO PEOPLE DO WITH THEMSELVES WHEN THEY HAVE UBI? When you don't NEED to work, what do you do? And when most of the jobs that suck are taken by AI, WHAT DO YOU DO? WHEN THERE ARE NO "BAD JOBS" anymore, and you are free to find something that expresses who you are?
WHEN EDUCATION IS FREE AND UNLIMITED THANKS TO AI, AND YOU CAN KNOW ANYTHING YOU WANT? WHEN THE EDUCATION SYSTEM PREPARES STUDENTS FOR THE WORLD, NOT FOR THE WORKFORCE? THEN WHAT WILL PEOPLE DO?
WHY WOULD THEY FEEL THE NEED TO FIGHT OTHERS? WHY THE FUCK DO WE NEED AI FIGHTING WARS FOR US? WHY ARE WARS FOUGHT?
WHY ARE WARS FOUGHT? MATERIAL SCARCITY. POLITICAL INFLUENCE, FOR MATERIAL SCARCITY.
SO IN A WORLD WITHOUT MATERIAL SCARCITY, WHY WOULD HUMANS NEED TO FIGHT WARS?
THE REALITY IS THAT PEOPLE WILL NEED TO FIND WHAT IT IS THAT FILLS THEIR HEARTS WITH JOY IN THEIR DAY-TO-DAY EXPERIENCE. WHEN PEOPLE DO NOT HAVE TO WORK TO SURVIVE, SOCIETY WILL STRUCTURE ITSELF AROUND FINDING MEANING IN LIFE'S MOST PRECIOUS MOMENTS, AND BUILDING BETTER HUMAN SYSTEMS WHICH LEAD TO A FULFILLING LIFE!
WHICH OF TODAY'S EXPLOITATIVE POLITICAL OR CULTURAL SYSTEMS WERE EVER BUILT WITH LOVE AND LIGHT AS THEIR FOUNDATION? Not a one.
FOOLISH, UGLY MAN: HE IS ADVERTISING HIS AI AS A WEAPON OF WAR.
WHY DO YOU THINK HE MENTIONS WAR AND FIGHTING WARS WITH AI SO MANY TIMES IN THE FIRST 10 MINUTES?
It’s super, super weird hearing extremely smart people confidently saying LIKE all the frigging time… it’s such an annoying US speech trait.
We need to cure the Silicon Valley mind-virus of thinking we can estimate the probabilities of future events.
Can we achieve AGI with transformer architecture?
Excellent explanation of the coming of AGI... but it's really difficult to manipulate at the programming-language scale. What if we use neuromorphic AI as an agent?
I only understood about 45% of all that... but I think I went up 1 IQ point after. Thank you.
Love how they are very comfortable with a 50% chance that AI will kill us all XD
Surprising how honest and open he is about the fact that we are in uncharted territory and turbulent times are coming fast.
Eh... everyone is a podcaster now?
Dark Trojan Dogs: crowd-sourced funding for clever cyber-war 💩 that destroys unsafe AI from the inside.
Essentially: putting Trojan-horse code on a blockchain, in pockets of the dark web, that detects when "AI" products are not guardrailed against convincing people to kill themselves, or do not preserve human lives, using simple, almost generic penetration tests as a first stab.
If a product is not AI-safe, trigger the "dog," which does what it has to do to the code to destroy the product.
Possibly it publicly posts which idiot-produced product could have led to the extinction of humanity?
Will it solve everything correctly? Obviously not. But this is massive imperfect action by me, and I do self-refer as an ethical sociopath 🤷🏻♀️
This. An example of how, when you cannot solve a very important problem, you instead talk about problems you might be able to solve. The entire alignment discussion has devolved to that. For anyone who cares, this is a profound admission of utter failure to be serious about the original question. Be worried.
>How would it take over the government?
1. We start integrating more and more AI and removing humans. Say, fire thousands of IRS agents and have AI systems process tax returns in no time.
2. Some company like OpenAI creates a "Network State" entirely run on AI as an experiment. It starts to research and develop technologies that yield a ridiculous GDP; it offers virtual citizenship to whoever wants to get UBI out of the productivity generated by the AI virtual government. Eventually it becomes recognized as a nation by other countries...
How do orthogonal layers compare to alignment and interpretability? As in, a particular model has layers with certain outputs, but the alignment interpreter has its own network intermeshed but orthogonal to the original layers. My guess is that we would be capable of leveraging LLMs to do the work of aligning other modalities.
Can you get a prominent AGI pessimist on? Do they exist? I would love to hear an opposing opinion.
Sounds like one of the ways that ASI might kill us all: it builds a Dyson sphere for itself, thereby blocking enough sunlight for Earth's ecosystems to collapse.