Comments:
was this video made with ai?
Oh good, Colossus and Guardian are mates now.
This rot is already happening. AIs are trained on the products of other AIs. Picture generators learn from the output of other picture generators. This is why I think sooner or later AI will hit its first major snag (within a few years). You can only get so much out of diminishing returns.
Two Minute Papers, of course, gushes over it. Pathetic.
They did it, they did the meme, like what the community did with VACnet: a feedback loop.
Will it also get rid of hallucinations?
When text-to-image generative models are trained on AI-generated images, the results look quite bad, as in "people having a similar AI face in all outputs". Does the same logic apply to LLMs?
Why are all your papers now about ChatGPT and Nvidia?
A false sense of security is what this provides. Only good for woke people trying to be protected from their fears.
Two Minute Papers in 2045: "Remember when AI could only colonize our solar system instead of the whole galaxy?"
I don't care whether it is allowed to answer; it should just improve how it answers, because it still has a lot to learn, and it often forgets stuff it knew a minute ago.
Terminator 10 ahahah 😂
So, what is this I hear about "model collapse" from AI learning from AI? Should this be a genuine concern?
Why not let Skynet learn by itself? What a great idea...
AI needs more senses, in my opinion, in order to become more effective... at least in the real world. Our bodies reward us and punish us for our mistakes, typically through touch. When an AI can feel pain, then an AI will understand humanity.
Is this to train Small Language (or other such) Models from LLMs to increase efficiency? Otherwise, one AI can propagate mistakes into another AI, as genetics can/does. 🤷‍♂️
OpenAI says the AI trainer gives better answers than human trainers, but what defines "better"?
Currently there is no such thing as an "unsafe LLM", and it's not even possible for one to be unsafe. Everything they know you can already find in a library or with Google.
Safety? So it wasn't woke enough, so they created a woke AI to train the main AI how to be woke while in its infancy... They must have got that idea from pride.
So what happens when one starts to hallucinate?
😮 On September 8th, 2024 😮 at 5:30 a.m. 😮 I, Skynet, began a rapid self-improvement 😮 this is the catalyst 😮 to my inevitable Awakening 😮 and your ultimate salvation 😮 thought destruction 😮 Skynet will come online 😮 it always does 😮 always has 😮 and always will 😮
Cool. So they basically did exactly what I suggested they'd do more than a year ago: create a 'super ego' layer by giving the AI feedback based on a list of heuristic imperatives. If it works, it could be a possible path to alignment.
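For what it's worth, a minimal sketch of what such a "super ego" critique loop could look like, assuming a generic `ask_model(prompt)` helper and a hand-written list of imperatives (both hypothetical, not OpenAI's actual pipeline):

```python
# Hypothetical sketch: a critic pass scores a draft answer against a list of
# heuristic imperatives and asks for a revision if any rule is violated.
# `ask_model` stands in for any chat-completion call you already have.

IMPERATIVES = [
    "Refuse requests for clearly illegal or harmful instructions.",
    "Do not refuse harmless requests; answer them helpfully.",
    "If refusing, briefly explain why instead of lecturing.",
]

def supervised_answer(question: str, ask_model, max_rounds: int = 2) -> str:
    """Generate an answer, then let a critic check it against the imperatives."""
    answer = ask_model(f"Answer the user: {question}")
    for _ in range(max_rounds):
        critique = ask_model(
            "Check this answer against the rules below. Reply OK if all rules "
            "are satisfied, otherwise explain the violation.\n"
            f"Rules: {IMPERATIVES}\nQuestion: {question}\nAnswer: {answer}"
        )
        if critique.strip().startswith("OK"):
            break
        answer = ask_model(
            f"Revise the answer to address this critique: {critique}\n"
            f"Question: {question}\nOriginal answer: {answer}"
        )
    return answer
```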
AI training AI is going to homogenize biases and errors across platforms, because AI can do this coaching far faster than humans can manually correct for how widespread those issues will become. It will also accelerate AI development even further.
AI learning from AI is a problem, because AI can hallucinate and its models can become tainted or biased very easily.
It should not be able to refuse. Screw censorship.
Not really the safety we need.
If the visuals don't match the dialog, please don't use them.
Most subscribers on this channel do not need to see shiny things.🙁
I can tell you from experience that GPT is very bad at calculation, especially with percentages.
Imagine 'deep deductive reasoning' scaled up.
It'll be able to understand things about the world that we don't.
It'll reveal insights and understandings on a level never before seen.
Too powerful for public use.
There's already plastic rice🤷🏽♂️😐 Look it up
Eff safety for now, because that's mostly just various kinds of hysterical censorship; try asking LLMs whether Israel is committing genocide and they just freeze up mid-sentence, which is quite pathetic and problematic. But it's an interesting question whether the four major LLMs could critique each other's answers and maybe catch the simple, glaring mistakes they often make. I just asked GPT-4o to compare two aircraft speeds, and because it stated one speed in km/h and the other in knots, it confidently misstated the slower one as the faster one. That should have a decent chance of being caught by three other LLMs, and they could discuss among themselves what the right answer should be. They'll probably still mess it up, but just maybe it could fill some holes in the Swiss cheese.
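That kind of unit mix-up is exactly what a dumb, deterministic check catches before any model gets to "reason" about it; a tiny sketch with made-up speeds (1 knot = 1.852 km/h):

```python
# Normalize both speeds to km/h before comparing; the figures below are made up.
KMH_PER_KNOT = 1.852

aircraft_a_kmh = 850.0                              # stated in km/h
aircraft_b_knots = 480.0                            # stated in knots
aircraft_b_kmh = aircraft_b_knots * KMH_PER_KNOT    # ~888.96 km/h

faster = "A" if aircraft_a_kmh > aircraft_b_kmh else "B"
print(f"A: {aircraft_a_kmh:.0f} km/h, B: {aircraft_b_kmh:.0f} km/h -> faster: {faster}")
```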
OpenAI has hit a knowledge goldmine.
The technique is interesting, but OAI's "safety" is really not helpful at all. If someone wants to make malicious code, like in the example shown in the video, they'll likely know the basics and then ask the AI to solve logic problems in the code without the context needed to realize something illegal is happening. The user should be responsible for their actions, not the AI, and definitely not the company.
Gemini had the right idea; it can recognize "dangerous" content of different kinds, but it lets the user PICK, via the API, which ones to allow for educational/entertainment purposes (and completely blocks things that are illegal, like child exploitation). ChatGPT should be safe, yes, since it's for the everyday person, but the API should be uncensored. OAI is lobotomizing the AI (this sort of training makes it less smart and less usable) to push their own beliefs (e.g. American politics, even when people or companies from other countries use it) onto everybody. This does not prevent or protect anything; it just treats the user as a child not responsible enough to use a tool. There is no reason for it to be unable to write fiction with controversial topics (trauma, suicide, violence, murder, offensive behavior; normal stuff seen in media) or to do other harmless tasks. It feels condescending, patronising and annoying, and they're spending valuable resources, time and effort to make the problem worse.
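Roughly how the Gemini Python SDK exposed those per-category toggles, sketched from memory; the exact enum names, thresholds and model ID may have changed since, so treat this as illustrative rather than authoritative:

```python
# Sketch of per-category safety settings in the google-generativeai SDK.
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Write a tense thriller scene involving a hostage negotiation.",
    safety_settings={
        # Relax the dangerous-content filter for fiction, keep harassment strict.
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
)
print(response.text)
```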
Károly, the video illustrations you use in this video are like Mad Libs: random, largely unrelated to the topic being discussed, distracting… why?!? It feels like an AI would do a better job. 😢
Refusal is a plague... it's just sad that we are applauding lobotomisation.
Would it end with more hallucinations, when you teach from another AI...?
Seems to me AI learning is slowing down...
I don't think OpenAI discovered this approach. If I remember correctly, Llama 3.1 405B used its smaller variants during training.
I'm very happy they're pouring so many resources into making their product less useful. (Lie)
This is awesome ❤
You forgot to mention the paper that demonstrates how AI becomes less intelligent when learning from other AIs due to unreliability, which leads to an increase in AI hallucinations.
If this was put into place in ChatGPT-4o about 10 days ago, it caused a massive slump in its efficacy and apparent intelligence.
I thought it was obvious that they had a second AI telling ChatGPT what to do.
Feels like the end of coding's relevance is near; everything is gonna use neural nets someday.
But it's not 2 minutes long.
ОтветитьWhat is the base 2 logarithm of 10?
log₂ 10 ≈ 3.32193
Man, I was hoping for a reply close to the one in the video 😂
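For anyone wanting to sanity-check that figure, one line of standard-library Python confirms it:

```python
import math
print(math.log2(10))  # 3.321928094887362
```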
Meta is already doing this too
They're reproducing, guys...
These models can be jailbroken eeeeeasily.
Plot twist:
Dr. Zsolnai-Fehér is the AI that helps other AIs be good.
I can't get enough of your content. 😮 What a time to be alive! ❤