> we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent
This alone is total BS. If you know how such language models work, you'd never take their responses at face value, even though it's tempting because they spout their BS so confidently. Always double-check a model's output before applying its "knowledge" in the real world.
The question they're trying to answer is flawed, so it's no wonder the result is just as bad.
Before anyone starts crying about my supposed opposition to language models: I'm not opposed to LMs or ChatGPT. In fact, I run LMs locally because they help me be more productive, and I'm a paying ChatGPT customer.
https://archive.ph/M5uol