
Under the Hood: How We Automated Trade Journaling with AI

Moving beyond generic chatbots: A technical look at how we implemented a "Write My Journal" button that analyzes equity, strategy performance, and psychology in one click.

Published: Fri, Nov 7th 2025

Trading is 90% psychology and 10% execution. The problem isn't that traders don't know this—it's that after a long day of staring at charts, the last thing anyone wants to do is write a detailed journal entry analyzing their emotional state.

We solved this by building a dedicated AI pipeline that does the heavy lifting. In our latest update, users can simply click a "Write my AI Journal" button, and the system generates a brutally honest, data-backed reflection of their trading session.

1. The Setup: Context Gathering

A generic prompt like "How was my trading?" returns generic garbage. To get a high-quality output, we have to feed the Large Language Model (LLM) high-quality, specific data. Before we even talk to the AI, our controller gathers a comprehensive snapshot of the user's day:

  • Equity & P/L Snapshot: We calculate the exact end-of-day equity and total profits/losses.
  • Strategy Breakdown: We pull data on every specific strategy executed. How many times did "Gap and Go" run? What was the win rate of the "Mean Reversion" bot today?
  • Algorithm Performance: We check the health and latency of the algorithms themselves.
  • Month-over-Month Trends: We provide historical context so the AI knows if today was an anomaly or part of a trend.

We package all this into a structured data payload. Think of it as handing a case file to a senior analyst before asking for their opinion.
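
As a rough sketch, that snapshot might be shaped something like this; the type and field names are illustrative, not our actual schema:

```typescript
// Illustrative shape of the daily snapshot assembled before calling the LLM.
// Field names are hypothetical; the real controller uses our internal models.
interface StrategyBreakdown {
  name: string;          // e.g. "Gap and Go", "Mean Reversion"
  executions: number;    // how many times the strategy ran today
  winRate: number;       // 0..1
  netPnl: number;        // profit/loss attributed to this strategy
}

interface AlgoHealth {
  algorithm: string;
  avgLatencyMs: number;  // execution latency observed today
  errorCount: number;
}

interface DailyContext {
  date: string;                     // ISO date of the session
  endOfDayEquity: number;           // exact end-of-day equity
  totalPnl: number;                 // total profits/losses for the day
  strategies: StrategyBreakdown[];  // per-strategy performance
  algorithms: AlgoHealth[];         // health and latency of the bots
  monthOverMonth: {                 // historical context for trend detection
    currentMonthPnl: number;
    previousMonthPnl: number;
  };
}
```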

2. The "Brain": Prompt Engineering

You can't just throw data at an LLM and hope for the best. We use a highly structured prompt designed to act as a strict accountability partner. Our prompt engineering enforces four distinct sections:

  1. Daily Insights: A factual look at what happened.
  2. Reflection: A deeper dive into why it happened.
  3. Psychological Observations: We explicitly ask the AI to look for signs of FOMO, overtrading, or revenge trading based on the frequency and timing of trades.
  4. Positioning for Next Session: Actionable advice for tomorrow.

Crucially, we hardcode specific financial goals into the prompt—a $2,000 daily profit target and a $100,000 annual equity goal. This forces the AI to measure success against a concrete benchmark, not just "doing well."
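
Here is a minimal sketch of how such a prompt could be assembled from the snapshot, reusing the DailyContext shape sketched earlier; the section headings and goals come straight from this article, but the wording (including the brutal-honesty instruction quoted below) is paraphrased rather than our production prompt:

```typescript
// Hypothetical prompt builder enforcing the four sections and hardcoded goals.
const DAILY_PROFIT_TARGET = 2_000;   // daily profit target in dollars
const ANNUAL_EQUITY_GOAL = 100_000;  // annual equity goal in dollars

function buildJournalPrompt(ctx: DailyContext): string {
  return [
    "You are a strict trading accountability partner. Be brutally honest.",
    "Return the journal as formatted HTML using the CSS classes described below.",
    "",
    "Structure the entry into exactly four sections:",
    "1. Daily Insights: a factual look at what happened.",
    "2. Reflection: a deeper dive into why it happened.",
    "3. Psychological Observations: look for signs of FOMO, overtrading, or",
    "   revenge trading based on the frequency and timing of trades.",
    "4. Positioning for Next Session: actionable advice for tomorrow.",
    "",
    `The daily profit target is $${DAILY_PROFIT_TARGET} and the annual equity goal is $${ANNUAL_EQUITY_GOAL}.`,
    `If daily profits are less than $${DAILY_PROFIT_TARGET}, acknowledge the shortfall and suggest steps to meet the goal.`,
    "",
    "Session data (JSON):",
    JSON.stringify(ctx, null, 2),
  ].join("\n");
}
```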

"If daily profits are less than $2,000, acknowledge the shortfall and suggest steps to meet the goal in future sessions. Be brutally honest."

3. The Output: Structured HTML & Logic

We instruct the LLM to return the response not just as text, but as formatted HTML. This includes:

  • Color-Coded Values: We use specific CSS classes to highlight positive P/L in green and losses in red, making the journal instantly scannable.
  • Hidden Metadata: We ask the AI to generate a "hidden title" for the journal entry based on its analysis (e.g., "Reflecting on Daily Wins & Refining Strategy"). We wrap this in a hidden HTML span tag within the response.

When our system receives the response, it doesn't just display it. It parses the HTML, extracts the hidden title with a regex, and then saves the entire entry into our UserJournal database text fields. This creates a permanent, searchable record of performance driven entirely by AI analysis.
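
A simplified version of that post-processing step might look like the following; the "journal-title" span, the regex, and the saveUserJournal helper are illustrative stand-ins, not our real markup or persistence layer:

```typescript
// Hypothetical post-processing of the LLM response: pull out the hidden title,
// then persist the HTML body alongside it.
interface JournalEntry {
  title: string;
  html: string;
}

function parseJournalResponse(html: string): JournalEntry {
  // The prompt asks the model to embed a hidden title, e.g.
  // <span class="journal-title" style="display:none">Reflecting on Daily Wins</span>
  const match = html.match(/<span[^>]*class="journal-title"[^>]*>(.*?)<\/span>/i);
  const title = match?.[1]?.trim() ?? "Untitled Session";
  return { title, html };
}

// Parse the response, then write both fields to the UserJournal record.
async function persistJournal(userId: string, rawHtml: string): Promise<void> {
  const { title, html } = parseJournalResponse(rawHtml);
  await saveUserJournal(userId, title, html); // stand-in for the actual ORM call
}

declare function saveUserJournal(userId: string, title: string, html: string): Promise<void>;
```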

The Result

The result is a frictionless feedback loop. By automating the "drudgery" of journaling, we ensure it actually happens. Traders get the benefit of a psychological review without the cognitive load of writing it, allowing them to focus on what matters: the next trade.
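
Tying the three steps together, the one-click flow reduces to roughly this sketch, reusing the pieces above; gatherDailyContext and callLlm are stand-ins for our data layer and LLM client:

```typescript
// End-to-end sketch of the "Write my AI Journal" flow.
async function writeAiJournal(userId: string): Promise<void> {
  const ctx = await gatherDailyContext(userId);   // 1. context gathering
  const prompt = buildJournalPrompt(ctx);         // 2. prompt engineering
  const responseHtml = await callLlm(prompt);     // LLM returns formatted HTML
  await persistJournal(userId, responseHtml);     // 3. parse the HTML and save it
}

declare function gatherDailyContext(userId: string): Promise<DailyContext>;
declare function callLlm(prompt: string): Promise<string>;
```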