Banks Are Right to Clamp Down on Office ChatGPT
Whatever the merits for their business, it’s probably the compliance cops’ worries carrying the day.
ChatGPT. OK, it's cool, but what is it for? This is the question I'd be asking if I were a banking executive. Oh, and of course: What are the risks of using it?
There is huge excitement about this bright new toy, but what it mainly does is produce content on demand that is distilled from information picked up off the internet. To my mind, what makes it smart is its ability to produce language that sounds like a convincing voice, not the substance of what it is telling you.
So why are banks banning it inside their businesses? The answer is in what bankers might use it for. Bank of America Corp. and Goldman Sachs Group Inc. have joined JPMorgan Chase & Co. in telling staff they mustn't use it for business purposes.
Those business purposes could be to generate a draft of a pitch document or research report, just as people have tried it out writing parts of academic papers, press releases or even entire novels. Maybe senior bankers think their juniors will get lazy. More likely, the compliance departments are fretting about the risks involved, especially after being fined by regulators for bankers' use of WhatsApp.
ChatGPT and other large language models have been shown to make mistakes, or even to hallucinate and invent non-existent fields of scientific enquiry, for example. If a sell-side analyst's research report turned out to contain plausible but entirely fictitious sectoral developments threatening or benefiting a listed company, I assume that would look bad.
Also, as ChatGPT goes around pulling information from the web, there's a danger that it might end up outright plagiarising someone else's work. Again, for a bank, or any information-centered business where reputation and trust matter, that would not be good.
ChatGPT could also be used to write computer code. Banks would be mad to let it anywhere near their code, however. There would be hurdles anyway for the banks that still have large parts of their systems built on proprietary coding languages that ChatGPT would need to learn. But beyond that, bank regulators and customers have an extremely low tolerance for failure in banking systems – trades need to be confirmed and settled, payments need to be made and companies and people need access to their cash. Banks have to be pretty sure that anything going on their computers is reliable and that they understand exactly what it is doing.
But back to the content question: A major selling point for traders, investment bankers and research analysts is their own intellectual content. Companies pay them big bucks to advise on takeovers or raise capital because they know things about rival firms and appetites for risk in markets. For similar reasons, investors pay banks to buy and sell assets, or to help construct bespoke derivatives trades with a plethora of payoffs. Would you want to pay so much if you thought a web-crawling robot was writing the pitch for your business?
I'm being somewhat facetious, of course. But the presentation of content is just that: presentation. It isn't the know-how, the skill, or the intellectual capital that is behind "the content." Banks, like most companies, produce an awful lot of spam: endless, self-promoting marketing materials, releases and brochures to convince people that their services are good — I should probably say "exceptional!"
We should poke fun at most of this. But at the same time, for any company that is fundamentally useful, there is real intellectual capability behind this voluminous noise. ChatGPT might be able to produce a beautiful and entirely convincing brochure about new homes, but I'm fairly sure it couldn't also build, decorate and furnish them. At least not yet.