Elchin's Notes

Beyond the Prompt: The B.A.S.E. Framework

February 2026

You wouldn't hire a Finance specialist and say “write me a report.” No context, no brief, no guidelines. You'd get garbage back and it would be your fault, not theirs.

That's exactly what most people do with AI.

They throw a half-baked prompt at a chatbot, get a mediocre result, and blame the tool. The tool isn't the problem. Your workflow is.

After months of embedding AI into finance operations and working with startups where AI touches almost every function, I've landed on something simple. I call it the B.A.S.E. Framework - four steps that turned AI from a novelty into actual infrastructure for how I work.


B - Brainstorm

I never start by giving AI instructions. I start by giving it my raw thinking - messy, incomplete, half-formed - and then I ask it to interview me.

“Here's what I'm thinking. Ask me five questions to figure out what I'm actually trying to do.”

This sounds small. It's not. It forces the AI to build real context instead of guessing. And half the time the questions surface things I hadn't thought through myself. It's a thinking tool before it's a doing tool.


A - Architect

This is the step most people skip entirely. They go from brainstorm straight to “write the thing” and wonder why the output feels flat.

Before any execution, I ask the AI to lay out the blueprint. If it's a financial memo, what's the structure and flow? If it's a contract review, what specific elements is it scanning for? If it's a reconciliation workflow, what are the steps and decision points?

We agree on the map before we start driving. When the architecture is right, execution almost takes care of itself.


S - Systemize

This is where it gets real, and where most people never go.

To get consistent results - not just one good output, but repeatable quality - you need to systemize the rules. I build what I call Skill Files: reference documents that capture tone, terminology, technical guardrails, and the specific standards the AI has to follow during execution.

Think of them as SOPs for your AI. In practice, a Skill File is a markdown document you load into Claude, Cursor, or whatever agentic system you're using. It's not code - it's written instructions. It might define how a finance memo should be structured, what terminology is acceptable, how uncertainty should be expressed, and when the AI must stop and escalate to a human instead of guessing.

Without Skill Files, every interaction starts from zero. With them, the AI already knows how you think, how you write, and what “good” looks like in your context. The system compounds over time.


E - Execute

Once brainstorming, architecture, and systems are in place, I stop using the chatbot. I hand everything - the blueprint, the rules, the context - to an AI agent and let it run.

Because the Systemize phase already set the guardrails, execution becomes largely hands-off. The agent knows the goal, knows the structure, knows the rules. This is where you get results that actually feel like they came from someone who understands your business.


The Point

B.A.S.E. isn't a one-time thing. It's a loop. Every time I find a better way to handle something - a sharper structure for a cash flow memo, a cleaner tone for client communications - I go back and update the Skill Files. The system gets sharper over time.

I'm running these workflows across finance and coding work right now, applying AI-driven development to real finance operations. More on the specific automations in upcoming posts.

Build the base first. The execution follows.

Deep Dive

B.A.S.E. in Practice for Finance & Founders

If you want to actually apply this in your work, these appendices show how each step operates in a real finance context.

Appendix B - Brainstorm in Practice

The Brainstorm phase is about context-loading, not prompting. Before you can get useful output, the AI needs to understand what you're actually trying to do - and most people can't articulate that cleanly upfront. That's the whole point.

The interview technique: Start by dumping your raw thinking and asking the AI to interrogate you.

“I need to prepare a monthly financial package for our board. Here's what I'm thinking - [messy notes]. Ask me 5 questions to figure out what this actually needs to accomplish.”

Good questions it will surface:

  • Who is the primary audience - investors or operators?
  • What decisions will be made from this?
  • What level of detail is expected?
  • What format have you used before?
  • What's the single most important thing they need to walk away understanding?

Those questions clarify your own thinking before you've written a single line.

Finance-specific example - cash flow memo: A founder asks AI to write a cash flow update. Without context, the output is generic. With the interview technique, the AI surfaces that the real purpose is to prepare the CEO for a lender covenant check-in, the lender cares specifically about the fixed charge coverage ratio, and there are two months of runway before the next draw. Now the output can actually address what matters.

What bad Brainstorm looks like: Pasting in a data table and saying “analyze this.” The AI has no idea what you're looking for, what decisions are downstream, or what level of uncertainty is acceptable. The output will be technically competent and practically useless.

A few opening prompts that work:

  • “I'm trying to [goal]. I haven't fully thought this through. Ask me questions.”
  • “Here's the situation: [context]. What do you need to know from me before we start?”
  • “I need to produce [deliverable] for [audience]. What assumptions are you making that I should verify?”

The Brainstorm phase takes 5-10 minutes. It saves hours of iteration.


Appendix A - Architect in Practice

Architecture is the blueprint phase. You're not building yet - you're agreeing on the structure of what you'll build.

Why this matters: The difference between a financial memo that gets read and one that gets filed is almost always structure. If the architecture is wrong - wrong sections, wrong sequence, wrong level of detail - the execution will be mediocre no matter how good the writing.

The finance architect prompt:

“Before we write anything, lay out the complete structure of this [deliverable]. What are the sections? What does each section accomplish? What data does each section need? What questions should each section answer?”

Example - variance analysis memo:

A well-architected variance memo follows a specific logic:

  • Executive summary with the bottom line
  • Revenue variances, decomposed by volume/price/mix
  • Expense variances by category
  • Cash position and runway
  • Forward-looking adjustments

Each section has a job. When the AI knows that upfront, the output fits together.

Without architecture: the AI writes a flowing narrative that mixes revenue and expense discussion, buries the key number in paragraph three, and doesn't address the forward-looking question the CEO actually cares about.

Finance deliverables where Architecture makes the biggest difference:

  • Board packages (the sequence of slides matters as much as the content)
  • Financial models (structure your assumptions, inputs, drivers, and outputs before building)
  • Contract reviews (define the risk categories you're scanning for before reading)
  • Due diligence memos (agree on the framework before doing the analysis)
  • Close reporting (standardize the structure so the system can follow it automatically)

The key question to ask in Architect phase: “If someone reads only the headers and the first sentence of each section, do they understand the story?” If not, the architecture needs work.


Appendix S - Systemize in Practice

This is the appendix I most want you to read. Systemize is where the framework becomes infrastructure instead of a productivity hack.

What is a Skill File?

A Skill File is a written document - usually markdown - that you load into your AI system before it does a specific type of work. It contains the rules, standards, and guardrails for that task.

You're probably already doing something like this informally. Every time you say “remember, always use parentheses for negative numbers” or “don't use the word ‘leverage’ - it means something specific to us” - that's proto-Skill File content. The difference is writing it down once and loading it consistently.

Skill Files work in Claude Projects, Cursor rules (.mdc files), custom instructions, or simply as a document you paste at the top of a conversation. The format doesn't matter. The consistency does.

What goes in a Finance Skill File?

Tone and language standards:

  • Write for a CFO or board audience (senior, financially literate, time-constrained)
  • Use plain language - no jargon unless it has a specific technical meaning
  • Active voice, present tense for current state, future tense for projections
  • Negative numbers in parentheses, never with a minus sign
  • Currency formatted as $X.XM or $X,XXX.XX - be explicit about which

Structural standards:

  • Every financial memo opens with a bottom-line summary (max 3 sentences)
  • Variances always include: dollar amount, percentage, and a one-sentence root cause
  • Forecasts always include a confidence range, not just a point estimate
  • Any recommendation must include the assumption it depends on

Escalation rules (critical):

  • If the AI encounters data that doesn't reconcile, it must flag it - not smooth over it
  • If a variance exceeds [X]% and no clear driver is identified, it states “requires further investigation” rather than guessing
  • If a classification is uncertain (confidence < 80%), it returns the options and asks for input rather than choosing

Domain-specific guardrails:

  • Only use account codes from the provided chart of accounts - never invent codes
  • When citing a data source, name it specifically (e.g., “per the October AP aging” not “per the data”)
  • Distinguish between booked actuals and projections at all times
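Collected in one place, the standards above might look like this as a markdown Skill File. This is an illustrative sketch - the title and section groupings are mine, and [X]% stays a placeholder you'd set for your own business:

```markdown
# Skill File: Finance Memo Standards

## Tone and language
- Write for a CFO or board audience: senior, financially literate, time-constrained.
- Plain language; no jargon unless it has a specific technical meaning.
- Active voice. Present tense for current state, future tense for projections.
- Negative numbers in parentheses, never with a minus sign.

## Structure
- Every memo opens with a bottom-line summary (max 3 sentences).
- Variances: dollar amount, percentage, and a one-sentence root cause.
- Forecasts include a confidence range, not just a point estimate.
- Every recommendation states the assumption it depends on.

## Escalation
- Data that doesn't reconcile: flag it, never smooth it over.
- Variance above [X]% with no clear driver: write "requires further investigation."
- Classification confidence below 80%: return the options and ask; don't choose.

## Guardrails
- Use only account codes from the provided chart of accounts.
- Cite data sources by name ("per the October AP aging").
- Always distinguish booked actuals from projections.
```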

Finance Skill File examples by use case

AP Invoice Processing Skill File

Defines extraction fields (vendor, invoice number, date, PO reference, line items, tax, total), three-way match logic (invoice vs. PO vs. receiving report), tolerance thresholds (±2% on price, ±5% on quantity), and escalation triggers (confidence below 85%, new vendor, amount over authority limit).
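The matching and routing logic that file describes can be sketched in a few lines of Python. The tolerance and confidence thresholds come from the description above; the field names, the `route_invoice` helper, and the example figures are hypothetical:

```python
# Hypothetical three-way match using the thresholds from the AP Skill File.
# Field names and the example invoice are illustrative, not a real schema.

AUTHORITY_LIMIT = 10_000.00   # amounts above this always go to a human
PRICE_TOL = 0.02              # ±2% tolerance on unit price
QTY_TOL = 0.05                # ±5% tolerance on quantity
MIN_CONFIDENCE = 0.85         # extraction confidence floor

def route_invoice(invoice, po, receipt):
    """Return 'auto', 'exception', or 'escalate' for one invoice."""
    # Hard escalation triggers come first
    if invoice["confidence"] < MIN_CONFIDENCE:
        return "escalate"
    if invoice["new_vendor"] or invoice["total"] > AUTHORITY_LIMIT:
        return "escalate"
    # Three-way match: invoice vs. PO price, invoice vs. receiving quantity
    price_off = abs(invoice["unit_price"] - po["unit_price"]) / po["unit_price"]
    qty_off = abs(invoice["quantity"] - receipt["quantity"]) / receipt["quantity"]
    if price_off <= PRICE_TOL and qty_off <= QTY_TOL:
        return "auto"
    return "exception"  # mismatch, but within human-review territory

result = route_invoice(
    {"confidence": 0.97, "new_vendor": False, "total": 4_800.00,
     "unit_price": 12.10, "quantity": 400},
    {"unit_price": 12.00},
    {"quantity": 400},
)
print(result)  # price is 0.83% off the PO, within tolerance
```

The point isn't the code - it's that every branch here is a rule the Skill File already wrote down, so the agent never has to guess.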

Monthly Close Variance Skill File

Defines the five root cause categories (Timing, Volume, Price, Mix, One-time), the materiality threshold above which a variance gets a full paragraph vs. a line item, the specific format for the executive summary, and the rule that every dollar cited must reference a specific data source.

Cash Flow Commentary Skill File

Defines how to frame the 13-week outlook (base case + stressed scenario), what drivers to include (AP timing, AR collections, payroll, debt service), the rule that forecast accuracy vs. prior week must be disclosed, and the alert threshold (projected cash below minimum operating balance within 10 days triggers an explicit flag).

Contract Review Skill File

Defines the risk categories to scan for (payment terms, auto-renewal, liability caps, IP ownership, termination rights), the output format (risk summary table + recommended actions), and the rule that any clause the AI is uncertain about gets flagged rather than interpreted.

The compounding effect

When you have Skill Files, every interaction gets better. When you refine a Skill File based on a bad output, every future output benefits. When a new team member or a new AI model takes over a task, the Skill File gives it the same context a veteran would have. This is the difference between prompting and infrastructure.


Appendix E - Execute in Practice

The Execute phase is where the work actually happens - and if you've done B, A, and S properly, it should feel anticlimactic. The agent has the context, the blueprint, and the rules. It runs.

Chatbot vs. Agent

There's an important distinction here. A chatbot (like a standard Claude conversation) is reactive - it responds to what you type, one turn at a time. An agent is goal-driven - you give it an objective and it works toward it autonomously, taking multiple steps, making decisions, and producing a complete output.

B.A.S.E. is designed for agents. The earlier phases are about giving the agent everything it needs to operate without constant hand-holding.

What finance agents actually run in practice

Invoice processing:

The agent reads incoming invoices (email, PDF, portal), extracts structured data, runs a three-way match against PO and receiving data, classifies each invoice as auto-processable, exception, or escalation, and routes accordingly. A well-architected and well-systemized agent handles 75-85% of invoices without human touch.

Variance analysis:

At close, the agent pulls actuals vs. budget from the ERP, identifies variances above the materiality threshold, classifies each by root cause category (using the Skill File definitions), drafts commentary in the required format, and flags anything that requires human judgment. The FP&A analyst reviews and approves - they don't start from scratch.
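The screen-and-format step can be sketched with a simple row schema. The $25K materiality threshold, the field names, and the example rows are illustrative assumptions; the formatting rules (parentheses for negatives, dollar + percent + one-line cause, "requires further investigation" when no driver is identified) come from the Skill File standards earlier:

```python
# Illustrative variance screen applying the Skill File formatting rules.
# Threshold, categories, and row schema are assumptions, not a real ERP feed.

MATERIALITY = 25_000.00  # below this: line item only, no commentary
CATEGORIES = {"Timing", "Volume", "Price", "Mix", "One-time"}

def fmt_usd(x):
    """Negative numbers in parentheses, per the Skill File standard."""
    return f"(${abs(x):,.0f})" if x < 0 else f"${x:,.0f}"

def variance_lines(rows):
    lines = []
    for r in rows:
        var = r["actual"] - r["budget"]
        if abs(var) < MATERIALITY:
            continue  # immaterial: excluded from commentary
        pct = var / r["budget"] * 100
        cause = r.get("cause")
        if cause not in CATEGORIES:
            cause = "requires further investigation"  # never guess a driver
        lines.append(f'{r["account"]}: {fmt_usd(var)} ({pct:+.1f}%) - {cause}')
    return lines

print(variance_lines([
    {"account": "Revenue", "actual": 1_180_000, "budget": 1_250_000, "cause": "Volume"},
    {"account": "Rent", "actual": 41_000, "budget": 40_000},         # immaterial
    {"account": "Marketing", "actual": 212_000, "budget": 160_000},  # no driver
]))
```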

Cash forecasting:

The agent pulls AP aging, AR aging, payroll schedule, and debt service data, runs the 13-week forecast model, generates three scenarios (base, stressed, optimistic), compares against prior week's forecast, and flags any week where projected cash approaches the minimum balance threshold.
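The roll-forward and the low-cash flag can be sketched as follows. The weekly figures, the stress haircut, and the minimum operating balance are made-up numbers, not a real forecast model:

```python
# Toy 13-week roll-forward with the minimum-balance alert described above.
# All dollar figures and the 85% stressed-collections haircut are illustrative.

MIN_OPERATING_BALANCE = 500_000.00
STRESS_HAIRCUT = 0.85  # stressed case: collect only 85% of forecast AR

def roll_forward(opening, inflows, outflows, haircut=1.0):
    """Return projected end-of-week cash for each week."""
    cash, path = opening, []
    for ar, out in zip(inflows, outflows):
        cash += ar * haircut - out
        path.append(round(cash, 2))
    return path

def low_cash_weeks(path):
    """Weeks (1-indexed) where projected cash breaches the minimum balance."""
    return [i + 1 for i, c in enumerate(path) if c < MIN_OPERATING_BALANCE]

inflows = [300_000] * 13
outflows = [280_000] * 12 + [600_000]   # debt service lands in week 13
base = roll_forward(1_000_000, inflows, outflows)
stressed = roll_forward(1_000_000, inflows, outflows, haircut=STRESS_HAIRCUT)
print(low_cash_weeks(base), low_cash_weeks(stressed))  # → [] [13]
```

The base case never breaches; under stressed collections, week 13 trips the alert - exactly the kind of divergence the commentary has to surface.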

Month-end close orchestration:

The agent tracks close task status across reconciliations, journal entries, and variance commentary, surfaces bottlenecks, estimates completion timing, and generates the real-time close dashboard the Controller and CFO see throughout close week.
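A toy version of that status-tracking logic, with made-up task names, statuses, and dates:

```python
# Minimal close tracker: surface blocked tasks and estimate completion.
# Tasks, statuses, and the date are illustrative placeholders.
from datetime import date, timedelta

tasks = [
    {"name": "Bank reconciliation", "status": "done",        "days_left": 0},
    {"name": "Accruals JE",         "status": "in_progress", "days_left": 1},
    {"name": "Variance commentary", "status": "blocked",     "days_left": 2},
]

def close_dashboard(tasks, today):
    bottlenecks = [t["name"] for t in tasks if t["status"] == "blocked"]
    remaining = max((t["days_left"] for t in tasks), default=0)
    return {"bottlenecks": bottlenecks,
            "estimated_close": (today + timedelta(days=remaining)).isoformat()}

print(close_dashboard(tasks, date(2026, 2, 5)))
# → {'bottlenecks': ['Variance commentary'], 'estimated_close': '2026-02-07'}
```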

The human role in Execute

You're not removed from the process - your role changes. Instead of doing the work, you're reviewing outputs, approving exceptions, and updating the system when it gets something wrong. This is the shift from operator to overseer.

The quality of your oversight depends on the quality of your Skill Files. If the escalation rules are well-defined, the things that reach you actually need your judgment. If they're vague, you get flooded with false positives and the system defeats itself.

What Execute tells you about the earlier phases

Every time an agent produces a bad output in Execute, it's usually a failure in B, A, or S:

  • Wrong framing → Brainstorm was incomplete
  • Wrong structure → Architecture was unclear
  • Wrong standards → Skill File was missing a rule

When you fix the underlying phase, the execution improves. That's the B.A.S.E. loop.

Elchin is the founder of CFOCrew - fractional CFO services for businesses with messy financials. He writes about AI strategy and finance automation at Elchin's Notes.