How I Use ChatGPT Vision As a Data Analyst

The PyCoach

8 months ago

3,457 views

Comments:

@superzero8874 - 10.12.2023 09:44

Hi Frank Andrade, can you make a video on a Python script to edit a file's metadata (like a song's or video's Author, Album, Artist, and so on)?

@matrss - 29.10.2023 12:47

I disliked this video because you failed to check the output of the LLM for correctness, and even recommended its usage in situations in which someone is not able to check the correctness at all (i.e. when using it for graphs you do not understand). If you had checked, you would have noticed that the "quick summary" the LLM gave at the beginning is objectively completely wrong. For example, in "2. Revenue and Profit Trend" the LLM says, "Revenue has been relatively stable, with a peak in Q3-2021 and a slight dip in Q1-2022." If you check the chart, you will notice that it is actually the opposite: Q1-2022 has the peak and there is a very slight dip in Q3-2021. The LLM also says: "The profit margin saw a peak in Q3-2021 and has been declining since." This is nonsense as well; it has not been declining since then. If you check section 5 of the quick summary, you will notice even more obvious mistakes: it mentions products that don't exist in the chart and numbers that don't belong to the given product.

Similarly, if you had checked the output the LLM produces for the handwritten formulas, you would have noticed that the generated LaTeX code does not match the given formulas at all. In the chalkboard image, for example, the LLM output places the parentheses in the wrong position. The other image of handwritten formulas bears hardly any resemblance to the LLM output at all.

There is no reason to assume that the output would be qualitatively better on charts you do not even understand yourself. It's just that you wouldn't even know if it made sense or not.

Therefore, you should always treat the output of an LLM as random nonsense until you have checked that this particular output does make some sense. After all, LLMs are nothing more than random generators that just so happen to produce mostly coherent text. Advising someone to take this output at face value, without even the ability to check it for correctness, is dangerous at best.
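
For what it's worth, this kind of check takes only a few lines of code. The sketch below uses pandas with made-up placeholder figures (the actual data behind the video's chart isn't available), purely to show how a "peak in Q3-2021" claim can be verified against the numbers instead of taken on faith:

import pandas as pd

# Placeholder quarterly revenue -- NOT the video's actual data,
# just illustrative values for the verification pattern.
revenue = pd.Series({
    "Q3-2021": 118_000,
    "Q4-2021": 121_000,
    "Q1-2022": 127_000,
    "Q2-2022": 124_000,
})

peak_quarter = revenue.idxmax()  # quarter with the highest revenue
dip_quarter = revenue.idxmin()   # quarter with the lowest revenue
print(f"peak: {peak_quarter}, dip: {dip_quarter}")

# If the summary claims the peak is in Q3-2021 but idxmax() points
# elsewhere, the summary is wrong, no matter how fluent it sounds.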

@moellerseo - 22.10.2023 12:34

Thanks brother. Great video.

@t2p5g4 - 19.10.2023 01:25

If you zoomed out further, you could make it even harder to read.

@andradelaberiano1256 - 19.10.2023 00:25

Thank you very much

@hadijannat4821 - 18.10.2023 22:33

Thank you very much, sir
