Farmer claiming for damaged fence

MightyBadger

2,035 posts

51 months

Sunday 14th April
Forester1965 said:
Nearly. The farmer chooses whether or not to claim from the 3rd party insurer and the insurer decides whether to allow the young lad to pay from his own pocket and avoid a claim. In either case the farmer is entitled to a fence equal to the one that was damaged (not better).
Sorry, yes that's what I meant, farmers choice.

Roger Irrelevant

2,941 posts

114 months

Sunday 14th April
EddieSteadyGo said:
Roger Irrelevant said:
This is the strong impression I've got too. Until recently I'd heard a lot about ChatGPT and how it was going to make me redundant in a matter of years, but I hadn't actually seen it in action. However I (a lawyer who works in the insurance industry, funnily enough) was asked to cast my eye over some ChatGPT-generated documents that a client's marketing department had put together. They were all to do with how taking out a certain financial product might impact on the state benefits you were entitled to. The documents were quite well written, accessible to the layman, plausible enough... and almost completely wrong. I mean riddled with basic errors from start to finish. But if you'd had to ask the question you wouldn't know that. I've seen a few other examples since, and while they haven't all been quite as bad as those first ones I saw, there's not been a single one that I'd have been happy for a client to publish - at best they've been misleading but were more often still just plain wrong. So I reckon I'm OK for a good few years yet.

As to the fence/car/insurance question - the correct answer has been given plenty of times already on this thread so I doubt me repeating it is going to change anybody's minds.
It's funny criticising "ChatGPT" when it gave you all the right answer in this case! And a useful hint for the dinosaurs... you should differentiate between the different versions... there is a gulf of difference between, say, GPT3.5, GPT4 and a fine-tuned model. When you have tried a fine-tuned model, trained on your own company's specific documents, I think you will change your mind as to how much of a threat it is.
Eh? Whether or not it gave the right answer in this situation, I was criticising it for giving plain wrong, sometimes borderline silly information in many other situations where I've seen it used. Is that fair enough? And forgive me for not knowing precisely which version of ChatGPT was used to produce the amateurish documents I reviewed, all I can say is that it was a version that was being lauded to the skies by my client as an incredibly useful tool that was going to change everything, just as you are doing here. So forgive me again if I don't start fretting about my future employment just yet.

EddieSteadyGo

11,967 posts

204 months

Sunday 14th April
Roger Irrelevant said:
Eh? Whether or not it gave the right answer in this situation, I was criticising it for giving plain wrong, sometimes borderline silly information in many other situations where I've seen it used. Is that fair enough? And forgive me for not knowing precisely which version of ChatGPT was used to produce the amateurish documents I reviewed, all I can say is that it was a version that was being lauded to the skies by my client as an incredibly useful tool that was going to change everything, just as you are doing here. So forgive me again if I don't start fretting about my future employment just yet.
You don't know the version, and you haven't any experience of using it yourself. Fair enough. You don't really know much about the topic I would suggest. From my experience, GPT3.5 is almost unusable. Same as Google Gemini Pro. GPT4 is much, much better, but you still often need to do a lot of 'prompt engineering' to get something useful. And even then the output will need to be checked. Although interestingly, on advanced statistics questions, it can be surprisingly good. But take the time to train your own model, using your own company information, reports and documentation, and the quality of the results can be excellent. Any company looking to use "chatGPT" for business purposes isn't serious unless they are making that effort.
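
For anyone wondering what "training on your own company documents" can look like in practice, the sketch below shows one common pattern: pull the most relevant passages out of local documents and hand them to the model alongside the question, so the answer is grounded in vetted material rather than the model's general knowledge. This is only an illustration of the general idea, not what EddieSteadyGo has actually built; the OpenAI Python client calls are real, but the "gpt-4" model name, the ./company_docs folder of .txt files and the naive keyword-overlap retrieval are assumptions made up for the example.

```python
# Minimal retrieval-grounded sketch (illustrative assumptions: a "gpt-4" model,
# a ./company_docs folder of .txt files, and crude keyword-overlap retrieval).
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def load_documents(folder="company_docs"):
    """Read each .txt file in the folder as one document."""
    return {p.name: p.read_text(encoding="utf-8") for p in Path(folder).glob("*.txt")}

def top_documents(question, documents, k=3):
    """Rank documents by keyword overlap; a real system would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def ask(question):
    docs = top_documents(question, load_documents())
    context = "\n\n".join(f"[{name}]\n{text[:2000]}" for name, text in docs)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided company documents. "
                        "If they do not contain the answer, say so."},
            {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask("Does our policy cover accidental damage to a neighbour's fence?"))
```

The same shape applies whether the grounding comes from keyword retrieval, embeddings or a properly fine-tuned model: the point is that the model only sees source material you control, which is what makes its output checkable.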

MustangGT

11,640 posts

281 months

Monday 15th April
EddieSteadyGo said:
Roger Irrelevant said:
Eh? Whether or not it gave the right answer in this situation, I was criticising it for giving plain wrong, sometimes borderline silly information in many other situations where I've seen it used. Is that fair enough? And forgive me for not knowing precisely which version of ChatGPT was used to produce the amateurish documents I reviewed, all I can say is that it was a version that was being lauded to the skies by my client as an incredibly useful tool that was going to change everything, just as you are doing here. So forgive me again if I don't start fretting about my future employment just yet.
You don't know the version, and you haven't any experience of using it yourself. Fair enough. You don't really know much about the topic I would suggest. From my experience, GPT3.5 is almost unusable. Same as Google Gemini Pro. GPT4 is much, much better, but you still often need to do a lot of 'prompt engineering' to get something useful. And even then the output will need to be checked. Although interestingly, on advanced statistics questions, it can be surprisingly good. But take the time to train your own model, using your own company information, reports and documentation, and the quality of the results can be excellent. Any company looking to use "chatGPT" for business purposes isn't serious unless they are making that effort.
So why are you suggesting it as a source of valid information? Quite clearly it is not.

EddieSteadyGo

11,967 posts

204 months

Monday 15th April
MustangGT said:
So why are you suggesting it as a source of valid information? Quite clearly it is not.
Quite clearly you haven't read the previous few pages, as it is all explained fully.

LF5335

5,971 posts

44 months

Monday 15th April
Forester1965 said:
Nearly. The farmer chooses whether or not to claim from the 3rd party insurer and the insurer decides whether to allow the young lad to pay from his own pocket and avoid a claim. In either case the farmer is entitled to a fence equal to the one that was damaged (not better).
It might cost the person/insurer that's paying more to provide a fence in the same condition as the one that was damaged, for example by sourcing matching weathered materials. Therefore a new fence may be the cheaper option.

MustangGT

11,640 posts

281 months

Monday 15th April
EddieSteadyGo said:
MustangGT said:
So why are you suggesting it as a source of valid information? Quite clearly it is not.
Quite clearly you haven't read the previous few pages, as it is all explained fully.
Yes, I have, including making several comments. On page 6 you brought in ChatGPT as an idea for finding the correct answer to a question and have been championing it ever since; others have pointed out that it can suggest a load of nonsense instead of factually accurate statements.

Gary C

12,480 posts

180 months

Monday 15th April
EddieSteadyGo said:
Roger Irrelevant said:
Eh? Whether or not it gave the right answer in this situation, I was criticising it for giving plain wrong, sometimes borderline silly information in many other situations where I've seen it used. Is that fair enough? And forgive me for not knowing precisely which version of ChatGPT was used to produce the amateurish documents I reviewed, all I can say is that it was a version that was being lauded to the skies by my client as an incredibly useful tool that was going to change everything, just as you are doing here. So forgive me again if I don't start fretting about my future employment just yet.
You don't know the version, and you haven't any experience of using it yourself. Fair enough. You don't really know much about the topic I would suggest. From my experience, GPT3.5 is almost unusable. Same as Google Gemini Pro. GPT4 is much, much better, but you still often need to do a lot of 'prompt engineering' to get something useful. And even then the output will need to be checked. Although interestingly, on advanced statistics questions, it can be surprisingly good. But take the time to train your own model, using your own company information, reports and documentation, and the quality of the results can be excellent. Any company looking to use "chatGPT" for business purposes isn't serious unless they are making that effort.
Do you actually know how LLMs work?

EddieSteadyGo

11,967 posts

204 months

Monday 15th April
MustangGT said:
Yes, I have, including making several comments. On page 6 you brought in ChatGPT as an idea for finding the correct answer to a question and have been championing it ever since, others have pointed out it can suggest a load of nonsense instead of factually accurate statements.
As I said, it answered this question correctly. And I also backed up its answer by providing links to the relevant Financial Ombudsman decisions, which is more than you have done.

Forester1965

1,518 posts

4 months

Monday 15th April
As was explained earlier, ombudsman decisions are not particularly helpful because they're decided on the facts and circumstances *of a particular case*. They don't follow any particular rules beyond deciding what's "fair and equitable" (as in they're not bound by the courts or the contract) and don't bind any other case.

TwigtheWonderkid

43,400 posts

151 months

Thursday 18th April
Killboy said:
TwigtheWonderkid said:
Killboy said:
TwigtheWonderkid said:
I've already said that's unacceptable, and the farmer is entitled to have it repaired by a professional contractor of his choice.
And if the kid won't pay? Poor farmer.
Then he goes direct to the kid's insurance! What's the issue?
But what if the kid insists on them not paying?
The kid can tell his insurer he wishes to settle privately. The kid can tell his insurer he wants them to settle. But the kid can't tell his insurer he doesn't want to pay and they shouldn't pay either. As said, rights of subrogation.