Comments:
Thank you, Tyler. Awesome as usual!
Any way to use AutoGen to log in to a website and perform a job?
I mean functionality where I can describe in plain text how to log in to a specific website with my credentials and do specific tasks, without manually specifying CSS or XPath selectors and without writing (or generating) code for Selenium or similar tools?
You earned a new subscriber and loyal follower, gentleman. Great speech modulation and clarity.
A wonderful beginner's tutorial. Thanks for providing the code so we can copy, paste, and test it quickly. Appreciate it.
Can I request code written in Next.js (TypeScript) or .NET (C#), or does this only work with Python?
Hi Tyler, I was following along with your repo and it vanished mid-tutorial. Any ideas? Great work btw.
Thanks Tyler. I see you suggested going with the OAI_CONFIG_LIST file instead of a .env, and they both appear to do the same thing. What's the difference?
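For reference, both routes end up producing the same config_list that gets passed to the agents; a minimal sketch of the two approaches, assuming the pyautogen helper config_list_from_json and a standard environment-variable setup:

import os
import autogen

# Option 1: load the model configs from the OAI_CONFIG_LIST JSON file used in the video
config_from_file = autogen.config_list_from_json("OAI_CONFIG_LIST")

# Option 2: build the same structure yourself from an environment variable
# (for example one set in a .env file and loaded with python-dotenv)
config_from_env = [{"model": "gpt-3.5-turbo", "api_key": os.environ["OPENAI_API_KEY"]}]

# either list is then passed to agents as llm_config={"config_list": ...}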
Tyler, excellent video. I learned a lot. God bless you.
Thank you for this excellent introduction! I have one question: I would like to have two agents perform an interview with a human on a certain topic. The first agent should ask the questions, while the second agent should reflect on their understanding of the topic and decide whether additional messages are needed. This seems like a good case for a nested chat. However, the nested chat seems to be bound to the number of turns you define at the beginning. Is there a way to have the nested agent decide when to finish the interaction?
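One pattern that may help (a hedged sketch, not the only way): instead of relying on a fixed max_turns, let the reflecting agent emit a keyword such as TERMINATE and stop on it with is_termination_msg; the same termination check applies whether the two agents talk directly or inside a nested chat queue.

import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

interviewer = autogen.ConversableAgent(
    name="interviewer",
    system_message="Ask one interview question at a time about the topic.",
    llm_config={"config_list": config_list},
    # is_termination_msg is checked against messages this agent RECEIVES,
    # so the reviewer's TERMINATE ends the exchange regardless of turn count
    is_termination_msg=lambda msg: "TERMINATE" in (msg.get("content") or ""),
    human_input_mode="NEVER",
)

reviewer = autogen.ConversableAgent(
    name="reviewer",
    system_message=(
        "Reflect on how well the topic is understood so far and either suggest a "
        "follow-up question or reply with the single word TERMINATE when done."
    ),
    llm_config={"config_list": config_list},
    human_input_mode="NEVER",
)

# the human side of the interview is elided here; the key point is that the
# reviewer, not a fixed turn limit, decides when the exchange stops
interviewer.initiate_chat(reviewer, message="Let's review what we know about the topic so far.")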
I usually do not comment, but I am commenting because you are just awesome. I was confused for the last 3 days about LangGraph vs AutoGen, but you have cleared up all my doubts with this video. Thanks.
This course is fantastic, thank you!!
Thanks for sharing this video, it helps me a lot.
I have one question: is it possible to dynamically change the base prompt (system_message)?
By "dynamically" I mean I would like to know how to change the system_message during the conversation.
Amazing tutorial, very clear and packed full!
Hi, so I want to do this with a locally run LLM. How do I change
[
    {
        "model": "gpt-3.5-turbo",
        "api_key": "sk-proj-1111"
    }
]
to run with, say, LM Studio with Llama 3?
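Since LM Studio serves an OpenAI-compatible local API, pointing AutoGen at it usually just means adding a base_url; a sketch under those assumptions (the port is LM Studio's usual default, and the model name is a placeholder for whatever you have loaded):

# config for LM Studio's local OpenAI-compatible server
local_config_list = [
    {
        "model": "llama-3-8b-instruct",          # placeholder; use the id of the model loaded in LM Studio
        "base_url": "http://localhost:1234/v1",  # LM Studio's default local endpoint
        "api_key": "lm-studio",                  # the key is ignored locally but must be non-empty
    }
]

# then pass it to your agents as llm_config={"config_list": local_config_list}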
OK, so sorry for the caps, I've mastered it now, THANKS :) The best thing you can do is talk a little slower though, haha, it makes it hard to follow when it's all new. EDIT: Just want to say to anyone not getting this at first, do it a few times and suddenly the penny will drop. Just focus on why he's writing what he's writing and get the structure understood, then it's not so hard anymore :)
Great explanation and summary of AutoGen!
Hi, while using functions it is answering 2+2 = 4, so how is that different from tools? I am using your exact code from your git.
Hi, I commented on your other SaaS customer survey video. I am using that code of yours, and I keep getting the error "openai.BadRequestError: Error code: 400 - {'error': "'messages' array must only contain objects with a 'content' field that is not empty."}" even though I have default_auto_reply="...".
A different question: when I followed the exact path of your SaaS customer survey video it ran the code once and even generated an output, but a couple of things I see are:
- I can't see all those requirement interactions between the agents,
- and once the execution is done with code generation, the last thing I get is the same error mentioned above.
PLS HELP
How can I do this in AutoGen Studio? I am in the conda environment but it doesn't work on my desktop.
legend
Where can I find the full code?
Hey Tyler! Amazing video.
I am non-technical, just wanted to know if I can use Jupyter Notebook instead of PyCharm. If yes, do I need to create a separate JSON file to call the OpenAI API key like you did for the two-way chat?
Thanks
Thanks for this awesome tutorial.
Looks like llama70 has some weird issues when trying the sequential chat!
I saw a video that had little game devs working together. I want to make a game in Unreal Engine using little AI agents as helpers, but I'm not entirely convinced that AI agents are all the way there yet (?). What all can be done in this regard, to your knowledge? Like, I need one agent to interact with me, as my liaison to the other agents, to help prioritize which agents operate and do tasks; one to give screen reading / keyboard / mouse control to, to operate some specific programs (Unreal, browser, etc.); one to scrape websites for data; one to compile that data into tables; one that can learn Unreal Engine; one that codes in multiple languages; one for front end; one for back end; one to operate local AI image generation on my workstation to make 2D pictures for inventory item sprites and UI design iterations for me to pick through; etc.
This is truly a fantastic video. I have been trying to learn CrewAI and I just get crap results every time, plus it's a lot more code. I think I am settling on AutoGen, and your video has been a huge help. Thank you, you just earned another sub :)
Is function calling the same as adding skills to agents? If not, can you make a video on adding skills to agents in AutoGen?
When should I use register_for_llm and when register_for_execution?
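Roughly, as a hedged sketch of the pyautogen decorator API: register_for_llm advertises the function's schema to the model so the LLM can decide to call it, while register_for_execution lets the executing agent (usually the user proxy) actually run the call. They are typically stacked on the same function:

import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

# the assistant "sees" the tool (its schema is sent to the LLM),
# the user proxy executes it when the LLM emits a tool call
@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Add two integers.")
def add(a: int, b: int) -> int:
    return a + b

user_proxy.initiate_chat(assistant, message="What is 2 + 2? Use the add tool.")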
Thanks a lot. Great tutorial. Learned a lot. 😍
So this is what I have observed, hoping you can help me with these:
1) initiate_chat picks a random agent[0] who summarizes the task. I'd like initiate_chat to always be started by the group_admin bot.
2) build_from_library() ignores the agent_ids in the position_list and makes up new agent_ids on the fly, generating their system prompts etc.
3) When I rerun the exact same user prompt a second and third time, it always creates/selects a different set of agents and solves the user query in a different way.
4) Sometimes it picks a random agent like "astronomer" for creating a social media post for Facebook, for example! 😮
5) The agents usually repeat a summary of the previous agents' output, so when reading through the chatter there is an insane amount of text to go through.
6) When all agents terminate, there is no final summarization of the result, and it's difficult to pick out from the message history where the final summary is (since the last few messages are "terminate", depending on the number of agents selected in the chat).
Maybe you can address some of these in a follow-up video or DM me with suggestions on how I can address these issues?
Also, could you show how we can send the chatter back to a web app service as and when each agent speaks? I am not well versed in websockets or how to integrate that within this code, since once initiate_chat is called the chatter is autonomous without a break, and we need a way to send the last message from the GroupChat messages back to the web app service.
🙏🏼
Another observation: if I simply state the user query as "who are you and what is your purpose?", it starts initiate_chat() with a couple of agents and they talk about life and AI etc., whereas it should be intelligent enough to know that the question doesn't really require a swarm of agents to respond to Hi / Hello / Who are you type of questions (which users invariably ask when starting a chat). How can we handle these trivial queries within this framework without spawning unnecessary agents and a group chat?
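A couple of these points (the first speaker and the missing final summary) may be controllable from the group chat setup itself; a hedged sketch assuming the pyautogen GroupChat API, with group_admin, writer, and critic as placeholder agents:

import autogen

config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}

group_admin = autogen.UserProxyAgent(
    name="group_admin", human_input_mode="NEVER", code_execution_config=False
)
writer = autogen.AssistantAgent(name="writer", llm_config=llm_config)
critic = autogen.AssistantAgent(name="critic", llm_config=llm_config)

groupchat = autogen.GroupChat(
    agents=[group_admin, writer, critic],
    messages=[],
    max_round=10,
    speaker_selection_method="round_robin",  # deterministic order; "auto" lets the LLM pick speakers
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

# whoever calls initiate_chat speaks first, so starting it from group_admin pins the opener;
# reflection_with_llm asks the model for a final summary instead of ending on "TERMINATE"
result = group_admin.initiate_chat(
    manager,
    message="Draft a short Facebook post announcing our new feature.",
    summary_method="reflection_with_llm",
)
print(result.summary)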
Quick note: when I copied the code from your GitHub repo, it had human_input_mode="ALWAYS". I didn't notice that, and the code completed and asked me for input. I then noticed that in the tutorial you used human_input_mode="NEVER". I made this change and was able to get the agents to "auto run". Perhaps others might run into this issue as well. Thanks for the tutorial!
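For anyone hitting the same thing, the flag lives on the user proxy agent; a minimal sketch (the code_execution_config values here are illustrative):

import autogen

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # "ALWAYS" pauses for keyboard input on every turn
    code_execution_config={"work_dir": "coding", "use_docker": False},
)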
Great beginner's course on AutoGen! A solid starting point for those new to AI development, and an excellent way to build foundational knowledge.
I don't have OpenAI keys as they are costly, but Cohere & Gemini offer APIs free of cost. So it would be great if we could get the same configuration via Cohere & Gemini, so that more people can practice for free.
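AutoGen has documented support for some non-OpenAI providers through extra fields in the config entry; as an assumption (the exact field values and required extras may have changed, so double-check the current docs), a Gemini entry looks roughly like this:

# hypothetical Gemini config entry; "api_type": "google" is how autogen's docs have
# routed requests to Gemini, and it may require installing the gemini extra
gemini_config_list = [
    {
        "model": "gemini-1.5-flash",        # placeholder model id
        "api_key": "YOUR_GEMINI_API_KEY",
        "api_type": "google",
    }
]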
Thank you :) you are awesome
In order to run phi-2 locally, what are the hardware requirements? I have 8GB of RAM with an i3 processor.
Great!
Only 11 minutes in, and this is great! Have subscribed!
Haha! On my execution of the first script, it pip-installed the yfinance library!
Thank you for creating this tutorial. When running the 01-twoway-chat program, it builds things differently than yours did. First it tried to install the 'yfinance' library but failed, so I manually installed it. Once installed, I reran the program. It retrieved the stock prices of META and TESLA, then extracted the stock price changes and plotted the line chart using matplotlib.pyplot. It did not save a file anywhere. I then copied and pasted your code over my own, but there was no change. I assume the difference is that between the time you made your tutorial and me taking it, OpenAI or AutoGen updated something. Your opinion? I asked OpenAI how to fix it and I will give it a try, but it basically has me manually creating a function for logging to a file as well as a second file for the plot. I'll give it a try, but I'm not sure how the changes they made would be considered progress, since you have to add code. Normally they either leave things alone or make them better. Edit: I give up. I guess there have been too many changes in either AutoGen or OpenAI since this tutorial was released. I'm not getting the same results even when copying and pasting your own code from GitHub. Oh well, thank you for your efforts anyway.
This is awesome stuff. But I really recommend you guys check the documentation :)
Tyler, I believe you have one of the best channels on YT for learning AI frameworks. I just finished watching your CrewAI & AutoGen videos, but I'm not sure when to pick one or the other. Maybe we could get a video explaining the differences in workflow?
Doesn't work from the first 10 minutes: xgboost issue.
This is a great video. My confusion is that this is not AutoGen Studio 2.0, but simply AutoGen with custom Python files, is that correct?
Hello, is it OK if I use Llama 3 instead of OpenAI with AutoGen? Would the method of function/tool calling be the same with a different LLM model, or would it be a different method to implement function calling or tool calling? The tutorial was brilliant. Thanks.
Any idea? flaml.automl is not available. Please install flaml[automl] to enable AutoML functionalities.
Hey! With this course, from beginning to end, you will become familiar with AutoGen and be able to create your own AI agent workflow. Like, subscribe and comment 😀 Have a good day coding!