Claude AI and trading/investing....

Luke.

Original Poster:

11,768 posts

272 months

I signed up to Claude Pro yesterday and have just been generally having a play around. Talking crap re cars, countryside routes to Le Mans etc. But then started discussing investments.

To be honest, on first impressions I'm blown away by what it was coming back with re research, due diligence etc and ideas of where to invest and how much. Though I've no idea if this is all la-la-land crap or could be a useful tool.

Anyone had a play with Claude and AI at all when it comes to investing?

Simpo Two

90,989 posts

287 months

Nope. But if I was interested I might try several, give them all the same information and see if they came up with the same answer.

Crumpet

4,972 posts

202 months

I asked ChatGPT a few questions today and the bulk of the answers it gave were wrong! The frustrating thing is it answers with absolute certainty. It can’t be trusted to answer basic questions so I don’t think I’d be putting much faith in its stock tips!

ooid

5,960 posts

122 months

Yesterday (11:29)
Luke. said:
Anyone had a play with Claude and AI at all when it comes to investing?
If you have reliable and properly cleaned data, and if you know quite specific questions to explore, then yes, it is pretty good. I have tested a few strategies; the analysis speed and observations are quite impressive.

locoloco

56 posts

153 months

Yesterday (12:35)
Mainly more for research.

So I might ask for an assessment of a company, what its moat is, and then ask subsequent questions to try to get a feel for whether there may be other contenders which would likely benefit from the surge/interest in that sector/product.

Or I have used them to backtest simple strategies, e.g. historically, if I sold calls 12% out of the money on 'x' stock, how many times would my position have been challenged? How many times has 'x' stock risen by 'y'% in a week, did that occur the following week too, how soon was any reversion and by what %?
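That sort of check is also only a few lines to run locally and then hand the results to the AI to interpret. A rough sketch, assuming daily closes in a hypothetical CSV; the filename, the 12% threshold and the one-week window are placeholders, not anything from a real setup:

```python
import pandas as pd

# Hypothetical input: one stock's daily closes, columns 'date' and 'close'.
prices = pd.read_csv("x_stock_daily.csv", parse_dates=["date"], index_col="date")
close = prices["close"].sort_index()

OTM_PCT = 0.12     # call sold 12% above the close on the day it's written
WINDOW = 5         # roughly one trading week to expiry

# Highest close over the next WINDOW sessions (excluding the sell day itself).
future_max = pd.concat(
    [close.shift(-k) for k in range(1, WINDOW + 1)], axis=1
).max(axis=1)

# "Challenged" = the stock closed above the strike before expiry.
challenged = (future_max > close * (1 + OTM_PCT)).iloc[:-WINDOW]

print(f"Challenged {challenged.sum()} times out of {len(challenged)} "
      f"({challenged.mean():.1%})")
```

The chat then only has to reason about the output rather than generate the numbers itself.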

The only other thing I've asked Claude/Grok to do beyond that is to weight a possible portfolio with percentages of stocks with various attributes, and then ask for alternatives or drill into the 'why'.

lizardbrain

3,686 posts

59 months

Yesterday (13:01)
I'm finding it quite useful for things like rebalancing, assessing risk, and translating reports.

However, I don't trust a chatbot to do actual backtests or forecasts. Often it will guess the likely answer based on context rather than literally doing the calculation or looking up the actual numbers. And the randomness of the guess will depend on the temperature setting that your chatbot happens to have that day, or that session.

If you keep all the calculations in a spreadsheet and feed that spreadsheet into the chat, I find the analysis can be very useful.
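As a rough illustration of that split (the file and column names below are made up; it assumes a CSV of daily prices with one column per holding), you do the sums in something deterministic and only give the chat the finished numbers to interpret:

```python
import pandas as pd

# Hypothetical file: 'date' index plus one close-price column per holding.
prices = pd.read_csv("portfolio_prices.csv", parse_dates=["date"], index_col="date")

# Do the calculations deterministically so the model never has to guess them.
returns = prices.pct_change().dropna()
summary = pd.DataFrame({
    "ann_return": (1 + returns.mean()) ** 252 - 1,
    "ann_vol": returns.std() * 252 ** 0.5,
    "max_drawdown": (prices / prices.cummax() - 1).min(),
})

# This is the file you hand to the chat, asking it to interpret, not calculate.
summary.to_csv("portfolio_summary.csv")
print(summary.round(3))
```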


Edited by lizardbrain on Saturday 14th February 13:19

JoshSm

3,190 posts

59 months

Yesterday (13:13)
Guess it depends what data you're expecting it to use and what result you expect to get.

Ultimately it's a pattern generator: it'll create a convincing-looking analysis, but that's entirely different from it being accurate. It works nothing like the more traditional automation used for trading.

Getting the right answer instead of the right-looking answer can be very hard work, especially as context can lead an AI down some very stupid rabbit holes that can be hard to get it back out of.

And that's assuming you blindly trust a model that worked one week to do the same the next, which it quite possibly won't.

LivLL

12,066 posts

219 months

Yesterday (13:36)
Crumpet said:
I asked ChatGPT a few questions today and the bulk of the answers it gave were wrong! The frustrating thing is it answers with absolute certainty. It can't be trusted to answer basic questions so I don't think I'd be putting much faith in its stock tips!
I've seen this. If you then tell it it's wrong and provide the correct answers, it uses them for a short period before reverting to a different, but still wrong, answer. They're so flawed you can't trust them for anything meaningful yet.

locoloco

56 posts

153 months

Yesterday (14:17)
lizardbrain said:
I'm finding it quite useful for things like rebalancing, assessing risk, and translating reports.

However, I don't trust a chatbot to do actual backtests or forecasts. Often it will guess the likely answer based on context rather than literally doing the calculation or looking up the actual numbers. And the randomness of the guess will depend on the temperature setting that your chatbot happens to have that day, or that session.

If you keep all the calculations in a spreadsheet and feed that spreadsheet into the chat, I find the analysis can be very useful.


Edited by lizardbrain on Saturday 14th February 13:19
Obviously it depends on exactly what is being asked and how. If I ask for a table going back 12 months showing every open/close price of an asset, highlighting which weeks exceeded 'x', and then ask for a test against that of doing 'y', I've found the AI 100% accurate. The issues tend to be when questions are vague or too many assumptions could be made by it.
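For what it's worth, that exact table is also quick to build outside the chat and then hand over for the "test against it" step. A sketch, assuming daily OHLC data in a hypothetical CSV; the filename and the 5% threshold are just placeholders:

```python
import pandas as pd

THRESHOLD = 0.05  # flag weeks where the open-to-close move exceeded 5%

# Hypothetical daily file with 'date', 'open', 'high', 'low', 'close' columns.
daily = pd.read_csv("asset_daily.csv", parse_dates=["date"], index_col="date")

# Roll the dailies up into weekly open/close bars.
weekly = daily.resample("W-FRI").agg({"open": "first", "close": "last"}).dropna()
weekly["move"] = weekly["close"] / weekly["open"] - 1
weekly["exceeded"] = weekly["move"].abs() > THRESHOLD

print(weekly.tail(52))  # roughly the last 12 months of weeks
```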

lizardbrain

3,686 posts

59 months

Yesterday (14:59)
I just tried a test asking for the specific days where the GOOG close was up more than 3% on the prior day's close, over the last 12 months. This is Claude Pro (not Max), Sonnet 4.5.

It's definitely got a lot better since the last time I used it; I like how it explains what it's trying to do as it works, rather than blindly spitting out an answer.

However, it still failed. It spent a few minutes trying to download the data from various sites, then gave up, manually scraped the data from one site, and overconfidently stated that it had 12 months' data. Looking at the table it created, though, it seems to have only about four months' data, which wasn't disclosed in the answer. So that's nothing close to 100% based on just one test.

However, this all seems easily fixable, and it's an improvement on the last time I used it, when it would pretend it had this data somehow memorised and spit out a number that sounded plausible, without really explaining how it arrived at the answer. This time it looked up the actual data, which took a lot longer.

I think my original point stands: the strength of these tools is in the analysis, not the calculation, and they remain very powerful if you can have a separate step to provide them with more robust data. It's especially powerful if you're happy with 95% confidence and contextual analysis. There are plenty of tasks where that level of confidence is perfectly valid.
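That particular test is also quick to replicate outside the chat, which gives you a ground-truth table to feed back in. A sketch, assuming the yfinance package is installed; the ticker and 3% threshold are as per the test above:

```python
import yfinance as yf

# Roughly the last 12 months of daily bars for GOOG.
data = yf.download("GOOG", period="1y", progress=False)
close = data["Close"].squeeze()  # squeeze() copes with single-ticker MultiIndex columns

# Days where the close was up more than 3% on the prior day's close.
daily_change = close.pct_change()
up_days = daily_change[daily_change > 0.03]

print(f"{len(up_days)} days up >3% out of {daily_change.count()} sessions")
print(up_days.round(4))
```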


Edited by lizardbrain on Saturday 14th February 15:17

Scarfie

218 posts

44 months

Yesterday (17:40)
I use Claude Code for software development and it genuinely is good. BUT I understand what AI is and how it works; many, many people do not.

It is a language model: all it does is predict what the next character or word should be, that is all. Yes, it can use other programs and scripts to throw results out, but it NEVER evaluates, calculates or gives FACTUAL results to you.

Do not use it as a source for problem solving or solution finding.

Do use it for taking your repo and your design patterns and getting it to program for you based on what you teach it; it's excellent. I have saved days of development time this way.

When I have tried to cut corners and get it to "create" something, I find I lose a lot more time undoing and unwinding.

It's a great tool in the right hands; in the wrong hands it can be a nightmare. If you don't agree, I would suggest you probably shouldn't be using it, as you don't understand it.

Also, overusing it DOES make you lazy, complacent and a bit stupid. So I really try to keep myself in check.

lizardbrain

3,686 posts

59 months

Yesterday (18:06)
I'm guessing the lines are increasingly blurred with tools like cowork, which reads and writes CSV files? Or the Chrome extension, which can scrape actual data. It feels like it's not going to be too long before LLMs can interact with robust data sources in a way that contains the randomness, though I agree it's not happening in Feb 2026. March or April, possibly.

Scarfie

218 posts

44 months

Yesterday (18:45)
Oh, the data they have had for the last 30 or possibly 40 years is extensive and will grow. Unfortunately, the more people use AI to "get" or "gain" "information", the more bad information gets fed back in. And as I said, it doesn't evaluate or calculate, it just guesses the best predicted outcome. For coding that works bloody well, because it is so linear; the language rules are rigid.

So unfortunately we are in a race to the bottom where people will treat an AI answer as fact. And act on it. Very bad news.