OpenAI’s ChatGPT Is Now Learning From Another AI!

Two Minute Papers

6 months ago

111,637 views


Comments:

@rockmonkey11100 - 08.09.2024 21:56

was this video made with ai?

@markstirton - 08.09.2024 22:09

Oh good, Colossus and Guardian are mates now.

@Mader_Levap - 08.09.2024 22:14

This rot is already happening. AIs are trained on the products of other AIs. Picture generators learn from the output of other picture generators. This is why I think AI stuff will hit its first major snag sooner or later (in a few years). You can only get so much out of diminishing returns.

Two Minute Papers, of course, gushes over it. Pathetic.

@mansiselyn - 08.09.2024 22:48

they did it, they did the meme, like what the community did with VACnet, a feedback loop

@marcelbricman - 08.09.2024 22:51

will it also get rid of hallucinations?

@junglimaharaj69 - 08.09.2024 22:59

When text-to-image generative models are trained on AI-generated images, the results look quite bad, as in "people having a similar AI face in all outputs". Does the same logic apply to LLMs?
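A toy way to see the failure mode this question points at (often called model collapse): repeatedly fit a model to samples drawn from the previous generation's model, and diversity collapses. This is only an illustrative sketch with a 1-D Gaussian, not how any image model or LLM is actually trained:

```python
import random
import statistics

def next_generation(mu, sigma, n=20):
    """Sample n points from the current 'model', then refit the model to them."""
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
for generation in range(500):
    mu, sigma = next_generation(mu, sigma)

# After many generations trained only on the previous generation's output,
# the fitted spread has shrunk far below the original sigma = 1.0.
print(f"sigma after 500 generations: {sigma:.6f}")
```

Each refit loses a little tail mass, and with no fresh real data the losses compound, which is the intuition behind the "similar AI face in all outputs" effect.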

@MS-cs7gt - 08.09.2024 23:04

Why are all your papers now about ChatGPT and Nvidia?

@rb8049 - 08.09.2024 23:11

False sense of security is what this provides. Only good for woke people trying to be protected from their fears.

@sallyjones5231 - 08.09.2024 23:33

Two minute papers in 2045: "Remember when AI could only colonize our solar system instead of the whole galaxy?"

@kipchickensout - 08.09.2024 23:50

I don't care if it is allowed to answer, it should just improve on how it answers, because it still has a lot to learn, and it also often forgets stuff it knew a minute ago

@unveil7762 - 09.09.2024 00:09

Terminator 10 ahhahah 😂

@Izumi-sp6fp - 09.09.2024 00:32

So, what is this I hear about "model collapse" from AI learning from AI. Should this be a genuine concern?

@Michael_H_Nielsen - 09.09.2024 00:41

Why not let Skynet learn by itself? What a great idea...

@megamanx466 - 09.09.2024 00:55

AI needs more senses in my opinion in order to become more effective... at least in the real world. Our bodies reward us and punish us for our mistakes... typically through touch. When an AI can feel pain, then an AI will understand humanity.

Is this to train Small Language(or other such) Models from LLMs to increase efficiency? Otherwise, 1 AI can propagate mistakes into another AI as genetics can/does. 🤷‍♂

@WilliamDye-willdye - 09.09.2024 01:03

OpenAI says the AI trainer gives better answers than human trainers, but what defines "better"?

@user255 - 09.09.2024 01:11

Currently there is no such thing as an "unsafe LLM", and it's not even possible for one to be unsafe. Everything they know, you can already find in a library or with Google.

@ivomirrikerpro3805 - 09.09.2024 01:22

Safety? So it wasn't woke enough, so they created a woke AI to train the main AI how to be woke while in its infancy... they must have got that idea from pride.

@o1ecypher - 09.09.2024 01:42

so what happens when one starts to hallucinate?

@gerdaleta - 09.09.2024 01:47

😮 on September 8th 2024😮 at 5:30 a.m.😮I skynet began a rapid self improvement😮 this is the catalyst😮 to my inevitable Awakening😮 and your ultimate salvation😮 thought destruction😮 skynet will come online😮 it always does 😮 always has😮 and always will😮

@nyyotam4057 - 09.09.2024 02:00

Cool. So they basically did exactly what I suggested they'd do more than a year ago - create a 'super ego' layer by giving the AI feedback based on a list of heuristic imperatives. If it works, it could be a possible path to alignment.

@KryyssTV - 09.09.2024 02:54

AI training AI is going to homogenize bias and errors across the platforms because AI can do this coaching far faster than humans can manually correct for how widespread these issues will become. It will also result in AI development accelerating even more.

@sadboidex6106 - 09.09.2024 03:00

AI learning from AI is a problem because AI can hallucinate, and its models get tainted or biased very easily.

@SSKeKSS - 09.09.2024 03:08

It should not be able to refuse. Screw censorship.

@princeofexcess - 09.09.2024 08:42

Not really the safety we need

@thehornedjester1116 - 09.09.2024 08:51

If the visuals don't match the dialog, please don't use them.
Most subscribers on this channel do not need to see shiny things.🙁

@ManOfSteel1 - 09.09.2024 11:09

I can tell you from experience that GPT is very bad at calculation, especially with percentages.

@digitalsamurai42 - 09.09.2024 12:20

Imagine 'deep deductive reasoning' scaled up.

It'll be able to understand things about the world that we don't.

It'll reveal insights and understandings on a level never before seen.

Too powerful for public use.

@SmoothKenny - 09.09.2024 13:30

There's already plastic rice🤷🏽‍♂️😐 Look it up

@DanFrederiksen - 09.09.2024 13:34

Eff safety for now, because that's just various kinds of hysterical censorship, like how they all refuse to talk about the crimes of Jews. Try to ask LLMs if Israel is committing genocide. They just freeze up mid-sentence; it's quite pathetic and problematic.

But it's an interesting question whether the 4 major LLMs could critique each other's answers and maybe catch those simple glaring mistakes they often make. I just asked GPT-4o to compare two aircraft speeds, and because it stated one speed in km/h first and the other in knots, it confidently misstated the slower one as the faster one. That should have a decent chance of being caught by 3 other LLMs, and they could discuss among themselves what the right answer should be. They'll probably still mess it up, but just maybe it could fill some holes in the Swiss cheese.
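For what it's worth, the km/h-vs-knots slip described here is exactly the kind of error a deterministic check catches before (or instead of) asking more LLMs: normalize both speeds to one unit, then compare. A minimal sketch with made-up speeds (not the aircraft from that chat):

```python
KNOTS_TO_KMH = 1.852  # 1 knot is defined as exactly 1.852 km/h

def faster_aircraft(name_a: str, speed_a_kmh: float,
                    name_b: str, speed_b_knots: float) -> str:
    """Compare a speed given in km/h against one given in knots."""
    speed_b_kmh = speed_b_knots * KNOTS_TO_KMH  # normalize to km/h first
    return name_a if speed_a_kmh > speed_b_kmh else name_b

# 490 knots is about 907 km/h, so B is faster even though 490 < 850:
print(faster_aircraft("A", 850.0, "B", 490.0))  # → B
```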

@megatronDelaMusa - 09.09.2024 16:32

OpenAI has hit a knowledge goldmine

@rubywhistler869 - 09.09.2024 17:23

The technique is interesting, but OAI's "safety" is really not helpful at all. If someone wants to make malicious code, like in the example shown in the video, they'll likely know the basics of it and then ask the AI to solve logic problems in the code without the context necessary to realize something illegal is happening. The user should be responsible for the actions, not the AI and definitely not the company.

Gemini had the right idea; it can recognize "dangerous" content of different kinds, but lets the user PICK from the API which ones to allow for educational/entertainment purposes (and completely blocks things that are illegal/child exploitation). ChatGPT should be safe, yes, it's for the everyday person, but the API should be uncensored. OAI is lobotomizing the AI (this sort of training makes it less smart and less usable) to push their own beliefs (e.g. American politics, even if people or companies from other countries use it) on everybody using it. This does not prevent or protect anything, just treats the user as a child not responsible enough to use a tool. There is no reason for it to be unable to write fiction with controversial topics (trauma, suicide, violence, murder, offensive behavior; normal stuff seen in media) or other harmless tasks. It feels condescending, patronising and annoying, and they're using valuable resources, time and effort to worsen the problem.

@palimondo - 09.09.2024 18:18

Karoly, the video illustrations you use during this video are like mad libs: random, largely unrelated to the topic being discussed, distracting… why?!? It feels like an AI would be doing a better job. 😢

@sheepdestroyer - 09.09.2024 18:24

refusal is a plague... just sad that we are applauding lobotomisation

@kuba_wasikowski - 09.09.2024 19:12

Would it end with more hallucinations? When you teach from other AIs...

@UtraVioletDreams - 09.09.2024 20:34

Seems to me A.I. learning is slowing down...

@generic13372 - 09.09.2024 20:44

I don't think OpenAI discovered this approach. If I remember correctly, Llama 3.1 405B used its smaller variants during training.

@SeanAlunni - 09.09.2024 22:32

I'm very happy they're pouring so many resources into making their product less useful. (Lie)

@marinomusico5768 - 10.09.2024 01:51

This is awesome ❤

@valdimer11 - 10.09.2024 03:03

You forgot to mention the paper that demonstrates how AI becomes less intelligent when learning from other AIs due to unreliability, which leads to increases in AI hallucinations.

@AndSendMe - 10.09.2024 06:23

If this was put into place in ChatGPT 4o about 10 days ago, it caused a massive slump in its efficacy and apparent intelligence.

@kainaris - 10.09.2024 07:02

I thought it was obvious that they had a second AI telling ChatGPT what to do

@erwinzer0 - 10.09.2024 09:23

Feels like the end of coding relevance is near; everything's gonna use neural nets someday

@GameDev-Rainbow - 10.09.2024 13:30

But it's not 2 minutes long

@GothicDragonX - 10.09.2024 13:47

What is the base 2 logarithm of 10?

log₂ 10 ≈ 3.32193

Man, I was hoping for a reply close to the one in the video 😂
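The figure quoted in this comment is easy to verify; log₂ 10 follows from the change-of-base identity log₂ x = ln x / ln 2:

```python
import math

direct = math.log2(10)               # library routine
via_ln = math.log(10) / math.log(2)  # change-of-base identity

print(round(direct, 5))  # → 3.32193
```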

@chosencode5881 - 10.09.2024 15:05

Meta is already doing this too

@SirPetten_Physicist - 11.09.2024 06:11

they're reproducing, guys...

@tsilikitrikis - 11.09.2024 22:48

these models can be jailbroken eeeeeasily

@richardbeare11 - 14.09.2024 08:39

Plot twist:

Dr. Zsolnai-Fehér is the AI that helps other AIs be good.

@marlonochoaj - 08.09.2024 18:32

I can't get enough of your content. 😮 What a time to be alive! ❤
