Dr Jose A B. on LinkedIn: Financial Statement Analysis with Large Language Models | BFI | 27 comments (2024)

Dr Jose A B.

Software developer. Political Economist.


In this beautiful paradox of a paper, researchers from the University of Chicago found GPT-4 incurred an outrageous 40% error rate when predicting earnings from financial statements. Which, in turn, translates to 7% better than humans. Make your own conclusions. ;)

#AI #technology #research #business

Summary: https://lnkd.in/dzfEFren
Paper: https://lnkd.in/dfaXB-AG
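Read literally, the post's two figures fit together with back-of-the-envelope arithmetic. This sketch only restates the numbers above, on the assumption that the "7%" means percentage points of accuracy:

```python
# Assumption: the post's "7% better" means 7 percentage points of accuracy.
gpt4_error_rate = 0.40
gpt4_accuracy = 1 - gpt4_error_rate        # a 40% error rate is 60% accuracy
human_accuracy = gpt4_accuracy - 0.07      # hence humans land around 53%

print(round(gpt4_accuracy, 2), round(human_accuracy, 2))
```

Which is the paradox in a nutshell: an error rate that sounds outrageous can still beat the human baseline.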

Financial Statement Analysis with Large Language Models | BFI https://bfi.uchicago.edu


Davor Magdic

Enter The Arena™

4d


How about looking at the people who did the research? The first author in the paper they refer to, one Alex G. Kim of UChicago, published this paper in APRIL of 2023: "Bloated Disclosures: Can ChatGPT Help Investors Process Information?", whose abstract ends with, "Finally, we show that the model is effective at constructing targeted summaries that identify firms' (non-)financial performance. Collectively, our results indicate that generative AI adds considerable value for investors with information processing constraints."

We all know that ChatGPT in April of 2023 was crap, and that nothing came out of it regarding financial analysis or much else. Boston Consulting Group (BCG)'s October 2023 paper showed that analysts using ChatGPT for firm analysis, even with the so-called "supervised" prompting group, showed NEGATIVE productivity over the control group, to the tune of 20%.

My conclusion is, Alex G. Kim wrote a BS paper then; there is no reason why it would be any different this time around.


Richard Self

Leadership and Keynote Speaker and member of the Data Science Research Centre at University of Derby

5d


It isn't worth bothering with. Not very surprising, given that LLMs cannot reason or do arithmetic. Part of the point is that we do not actually know the future, which is why neither humans nor LLMs can forecast it.


Julius Reynolds-Canilli

Family Office CIO | Founder @ FMG

4d


That 7% difference is potentially worth billions.


Tim Dasey, Ph.D.

Education for an AI world ~ Keynote speaker ~ AI Strategy and policy ~ Curriculum Development ~ Professional Dev. ~ Educational Gaming ~ Author

4d


The absolute performance level is also about the difficulty of the task. The interesting part is that GPT-4 did better than custom AI investment models, which had been doing better than people for some time.



More Relevant Posts

  • Dr Jose A B.

    Software developer. Political Economist.


    Amidst the chaos caused by the ChatGPT (& others) outage, I am happy to confirm LOKAL is still working. Not a glitch. Because it works directly on your computer.

    Yet another reason to go LOKAL!

    Don't need transcriptions at this time of life? No worries, get in touch about other edge AI tools I have on the back-burner due to lack of the millions of dollars needed to move faster in this space. 😎

    #AI #edgeAI #technology
    https://lnkd.in/gR3pvnB3

    GitHub - jbolns/LOKAL_transcriptions: Edge AI > AI app to easily perform transcriptions on regular computers. Quality on par with on-cloud alternatives. Lower costs. Reduced privacy risks. github.com


  • Dr Jose A B.

    Software developer. Political Economist.


    Are you a sustainability expert who is starting to be concerned about the enormous environmental impact of digital technologies? Welcome to a very lonely place! 🤣

    #sustainability #digitalsustainability #ai #sdgs #esg


  • Dr Jose A B.

    Software developer. Political Economist.


    A recent paper shedding doubt on the claim that LLMs can "reason", found via the skeptic side of my network.

    The underlying definition of reasoning seems to be that reasoning is more than information retrieval.
    - Under an extreme version of said view, humans reason only when deviating from pre-established scripts.
    - Under a less radical version, humans reason at all times by choosing to stick to or deviate from standing scripts.

    It's a good threshold for reasoning. As far as thresholds go.

    This binary (no/yes) way to look at reasoning is gaining popularity. It is NOT, however, how reason is conceived by those who see traces of reasoning in LLMs. On this side of the debate, reason is seen in more fluid terms, as a kind of spectrum.

    #AI #reasoning #AGI #intelligence

    Ps. It matters little on which side of the debate I am, but I have said it several times: I am a panpsychist at the theoretical level and a functionalist at the practical one.

    https://lnkd.in/di3TMbRD

    On the Brittle Foundations of ReAct Prompting for Agentic Large Language Models arxiv.org


  • Dr Jose A B.

    Software developer. Political Economist.


    LOKAL does NOT use LLMs. That's on purpose.
    https://lnkd.in/gR3pvnB3

    As we can see now as the result of a certain company's experimental search features, LLMs can make somewhat inconvenient mistakes. This is fine in some settings. But when you use transcriptions for #research or #analytics, small mistakes matter.

    This is not to say I condone the recent bashing of LLMs. Well implemented, they can help build great things. But you gotta know when and how.

    Need transcriptions? Check LOKAL. I know what I'm doing.

    Need AI services? Get in touch. I know what I'm doing.

    #AI #responsibleAI #AIethics #LLMs #qualitative #digitaltransformation

    GitHub - jbolns/LOKAL_transcriptions: Edge AI > AI app to easily perform transcriptions on regular computers. Quality on par with on-cloud alternatives. Lower costs. Reduced privacy risks. github.com


  • Dr Jose A B.

    Software developer. Political Economist.


    We have entered a stage of disappointment with LLMs. What's next? JEPAs!? What's a JEPA, anyway?

    ➡ Hypotheses are hypotheses
    While based on tons of research and accumulated knowledge, actions and statements about the future are ALWAYS hypotheses. Let's not discard Sutskever and the like. Also, as I wrote yesterday when addressing the 1st part of the interview linked in this post, we can acknowledge the limitations of any technology without falling into a graceless bashing of technology and innovation.
    https://lnkd.in/g59mhNP6
    Having said that, I would also listen to LeCun and keep an eye on Meta. The combination of brain and GPU power currently taking place at Meta is outstanding.

    ➡ TGFI... Eh, I mean, JEPA (Joint Embedding Predictive Architectures)
    LeCun's hypothesis is that JEPAs are a promising alternative to LLMs.
    https://lnkd.in/gZnw4F-U
    What's a JEPA?
    -> JEPAs depart from the idea that intelligence is built upon (visual) observation of the world. They are visual models.
    -> Similarly to humans, JEPAs start by eliminating irrelevant information from an observation (to avoid going crazy, I imagine).
    -> What comes next is a bit of a fill-in-the-gaps exercise where you garble the banana out of part (or a copy) of the observation, then train the thing to predict the un-garbled section (or version) from the garbled one.

    ➡ Use cases
    A trained JEPA can imagine sections of an image or video from other sections.
    https://lnkd.in/geM8vQSN
    https://lnkd.in/gwQRvcEZ
    This gives it the ability to represent* things not directly observed.
    -> A segment of a photo lost colour: JEPAs can infer the colour from the rest of the image (since you can do it many times, it also works for video).
    -> A second of a video was lost: JEPAs can infer the second from the video before and after the missing part.
    This might give it the ability to PLAN for things.
    -> Driving: JEPAs could help machines do what humans do when driving, i.e., imagine and plan for possible events.
    -> Walking: JEPAs could help machines do what humans do when walking, i.e., not fall (haha!).
    JEPAs might be able to do this in (near) real-time, thanks to inferences taking place after irrelevant information is discarded.

    ➡ And how is this different to auto-regressive modelling?
    Tomato. Tomato. Haha, not really, but I have run out of space and need to leave something for the haters to engage with. But I will make one note. If you watch the interview below, LeCun explicitly says the real winner is a combination of visual and language (min 7).

    Let's not be dramatic about the whole thing.

    #AI #LLMs #JEPAs #technology

    * Note the word "represent" rather than reconstruct. What JEPAs do is like making a hypothesis. A representation of reality. Not reality.
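The fill-in-the-gaps exercise described in the post can be sketched in a few lines of toy code. Everything here is an illustrative stand-in (random linear maps play the role of the deep encoders and predictor a real JEPA would train); the one load-bearing point is that the loss compares embeddings of the masked part, not reconstructed pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observation": 8 patches of 16 features each (stand-in for image patches).
patches = rng.normal(size=(8, 16))

# Hypothetical linear "encoders" (real JEPAs use deep nets; the target encoder
# is often a slow-moving copy of the context encoder).
enc_ctx = rng.normal(size=(16, 4)) * 0.1   # context encoder -> 4-d embeddings
enc_tgt = enc_ctx.copy()                   # target encoder
predictor = np.eye(4)                      # predictor network, identity here

# "Garble the banana out of" part of the observation: mask patches 3-4.
mask = np.zeros(8, dtype=bool)
mask[3:5] = True

ctx_emb = patches[~mask] @ enc_ctx         # embed only the visible context
tgt_emb = patches[mask] @ enc_tgt          # embed the masked targets

# Predict the target embeddings from the pooled context and score the guess
# IN LATENT SPACE -- this embedding-distance loss is what distinguishes a
# JEPA from pixel-reconstruction (or token-by-token autoregressive) training.
pred = ctx_emb.mean(axis=0) @ predictor
loss = np.mean((pred - tgt_emb) ** 2)
```

Training would then adjust the encoders and predictor to drive this latent-space loss down, which is the "represent, not reconstruct" point made in the footnote above.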

    Yann LeCun: Limits of LLMs | Lex Fridman Podcast Clips

    https://www.youtube.com/


  • Dr Jose A B.

    Software developer. Political Economist.


    We have entered a stage of disappointment with LLMs. Here's why disappointment is warranted but, at once, not to be exaggerated or used to bash AI.

    ➡ There's a lot to AI beyond GPT and the like.
    Say you don't like the products of a certain famous company, or even a full category of products; there's a lot more to choose from. Let's be mindful of what any technology can and cannot do. But let's avoid spurious generalisations.

    ➡ LLMs will unlikely reach superintelligence, but this is hardly news.
    THE man himself, i.e., LeCun, said it (edit: typo): LLMs miss some of the key components that seem necessary to think of superintelligence.
    https://lnkd.in/dqE9M8Dt
    It's not the first time he says something to that end. It's also quite a widespread rationale. Many people see LLMs in a similar manner, no doubt partially thanks to LeCun's willingness to discuss these matters in accessible terms. For instance, while I advocate for a non-extremist version of the idea of superalignment where we already start aligning human and machine interests inasmuch as possible, I also believe and have noted repeatedly that whereas human intelligence is characterised by efficiency, LLMs are quite inefficient, and that deep multimodal capacities are needed to conceive of the kind of intelligence humans have.*
    Let's be mindful of LLMs' limitations, but let's not misrepresent AI proponents as somehow thinking or suggesting LLMs are all we need. This is absurd!

    ➡ LLMs do not need to be superintelligent to add value
    If you listen to the interview, LeCun also explicitly notes that this "is not to say that autoregressive LLMs are not useful – they are certainly useful; that they are not interesting; that we cannot build a whole ecosystem of applications around them – of course we can."
    LLMs are products like any other. You can use them for some things. They will not do great at other things. This is why you hire an expert instead of asking the LLM to implement itself.

    ➡ Finally, the controversial one: LLMs have leapfrogged AI
    A few years ago we were all making bets on whether computers could ever beat a Turing test. LLMs smashed this threshold, and then some. This is hardly something one could call a fundamentally flawed technology. It's nice to examine things with the benefit of hindsight, and we should do it, yes. But let's avoid denying history.

    🦉 Thank you for coming to my TED talk.

    #AI #AGI #superintelligence #AIethics

    * BIG NOTE. Remember I am a panpsychist. The paragraph makes reference to human intelligence, not "intelligence". I do not share this absurd view of the mind being something that is turned on/off after a given threshold. The question of whether LLMs can reason (primitively) or if there is (some) intelligence in them is different to the question of whether they can reach the level of human intelligence or be superintelligent.

    LLMs are not superintelligent | Yann LeCun and Lex Fridman

    https://www.youtube.com/


  • Dr Jose A B.

    Software developer. Political Economist.


    Today, I want to point out that the UK might still surprise everyone with an approach to AI regulation that actually balances safety and innovation.

    I am not speaking specifically about Sunak's current approach. I believe that Sunak's approach to AI regulation makes sense, especially if compared to the EU's approach. If continued, it might deliver. But I expect Labour to come to power and depart from it. So, I have been reading about what a Labour approach to AI could look like. Ideological differences notwithstanding, it makes sense. At the very least, it is something a pro-AI, pro-innovation person like me could live with.

    🇬🇧 Starmer's and Labour's official position.
    - https://lnkd.in/dH_BEsRC
    - https://lnkd.in/dP-Y9gNy
    - https://lnkd.in/ddcCaAdi
    I find it stern but, at the same time, grounded and constructive. For instance, when considering the potential for widespread job losses, Starmer also considers the need for upskilling and training. Furthermore, the need to ensure regulations cut rather than add red tape is explicitly noted. Finally, I found his response to the deepfake to be fairly pragmatic. It is also worth noting that the regulatory agenda is being taken as part of a broader strategy rather than as the objective itself.

    🇬🇧 Alternatively, you might also want to check TBI's website for a taste of what other prominent Labour figures are thinking.
    - https://lnkd.in/dBppbqA5
    - https://lnkd.in/dFYvjFvG
    - https://lnkd.in/d_5zWTbR
    The views there are also quite aware of the UK's need to reignite innovation. In fact, they explicitly emphasise the opportunities available in AI (rather than simply acknowledging them in passing or, worse, after the fact). Furthermore, I welcome the interest in a framework that is adaptive and improves iteratively (as opposed to some monolithic treaty, one imagines).

    🤖 Can the UK deliver an AI strategy that actually balances safety and innovation? Maybe, maybe not. But as I have long said here: if you are going to regulate AI, you might as well do it right. While one cannot know whether things will turn draconian if Labour comes to power, there is room to hope for a mature approach that cares about safety without forgetting that innovation saves lives and economies.

    ... Pity, I don't live in the UK any more. 🤣

    Thanks Nick Cowen for helpful pointers and sobering advice.

    #AI #UK #AIregulation #AIethics #technology

    Starmer emphasises AI regulation to safeguard jobs uk.news.yahoo.com


  • Dr Jose A B.

    Software developer. Political Economist.


    Microsoft is moving #AI to the #edge! I have been working on edge AI for a bit. This is what I would ask Satya Nadella if I had a chance...

    🖼 The context
    My interest in edge AI is anything but superficial. I even have a product that enables users to do AI locally without having to share sensitive data.
    https://lnkd.in/gR3pvnB3
    Not to say I am upset that Apple and Microsoft are moving this way. Quite the contrary. I believe edge AI is a net gain for humanity. I want to see it happen. But while I believe there is much profit to be made, I also think there will be pressing profit/privacy trade-offs to consider. And so, I wonder...

    ⁉ The questions
    Satya noted in this interview that "we are at the very early stages of understanding how our relationship with AI agents should be shaped by us". This relationship will be strongly shaped by product features, too. I love Microsoft products, I really do, but I wouldn't say data-sharing features are particularly easy to manage, let alone shape. Therefore...
    Q1. Can I really trust Microsoft to give me clear and meaningful choice if this ever gets in the way of short-term profits?
    Q2. Why? How?
    Q3. What concrete actions are being taken to avoid privacy-related confusion with edge AI features?

    Ilya Venger

    #edgeAI #AI #privacy #dataethics #AIethics

    Microsoft vs. Apple: Satya Nadella Says AI-Focused Copilot+ PCs Beat Macs | WSJ

    https://www.youtube.com/


  • Dr Jose A B.

    Software developer. Political Economist.


    SHE is HER! OpenAI got this one wrong. But while we might see a golden parachute opening, reputational damage fears (or hopes) seem overstated.

    As I have said before, while OpenAI is building outstanding products, its governance is a bit of a mess. This matter, in particular, is dangerous stuff. If anyone, and I mean ANYONE, gets away with profiting from tangentially impersonating someone, we are all in deep trouble. What's next? Someone makes a 3D model that looks just like you minus a mole and whatnot and gets to do anything they want? That's really worrying.

    I therefore admire Scarlett Johansson for standing up for what is right. Note that I am intentionally using the word 'admire', not something else like 'commend'. It takes courage. I am looking at her as an example to follow, not as someone who needs my approval.

    But... OpenAI is making products people need or, at the very least, want. The products are very good or, at the very least, best in class. We might get to see Altman's Golden Parachute one of these days. Honestly, I'm starting to believe this is what he wants. He even pointed the guns at himself with the tweet. Plus, he's clearly in the next chapter already, even looking for funding for his next venture. But if you want OpenAI gone, you will have to code a product better than theirs. That's just the way life is.

    !!! BIG NOTE. Post edited for focus. I went too abstract with it initially. My bad.

    #AI #AIethics #technology

    Scarlett Johansson says she is 'shocked, angered' over new ChatGPT voice npr.org

