MeshWorld.

OpenAI Codex & Agents Cheatsheet (2026 Edition)

By Vishnu

While everyone is distracted by CLI tools and IDE plugins, the real power users are building custom agentic workflows directly against the OpenAI API. In 2026, the shift from raw text generation to the OpenAI Agents SDK and strict JSON mode has fundamentally changed how we write AI wrappers.

Here is the no-nonsense cheatsheet for hitting the OpenAI API securely and reliably.

The Modern SDK Setup

Stop using fetch. Use the official SDK to handle retries, streaming, and tool schemas.

// npm install openai zod
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // Default, but good to be explicit
  maxRetries: 3, // Auto-retry on 429s or 503s
});

Enforcing Strict JSON Output

The biggest headache in 2023 was parsing broken JSON from an LLM. In 2026, you use response_format: { type: "json_schema" } with strict: true to guarantee the shape of the data. No more Regex hacks.

const response = await openai.chat.completions.create({
  model: "gpt-5.2-turbo",
  messages: [{ role: "user", content: "Extract the bug report details." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "bug_report",
      strict: true,
      schema: {
        type: "object",
        properties: {
          error_code: { type: "integer" },
          file_name: { type: "string" },
          is_critical: { type: "boolean" }
        },
        required: ["error_code", "file_name", "is_critical"],
        additionalProperties: false // MANDATORY for strict mode
      }
    }
  }
});

// Strict mode guarantees schema-valid JSON; content is null only on a refusal
const bugData = JSON.parse(response.choices[0].message.content);

The Scenario: You’re building an internal tool that parses raw customer support emails and inserts them into a Postgres database. If the AI hallucinates a string instead of a boolean for the is_critical column, your database throws an error and the pipeline crashes. With strict: true and a defined schema, the API constrains generation so it cannot emit JSON that violates the shape. Your pipeline runs flawlessly overnight.
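Even so, a cheap runtime check before the Postgres insert costs nothing. Here is a minimal sketch of a type guard — the `BugReport` interface and `isBugReport` helper are illustrative, not SDK types:

```typescript
// Shape matching the bug_report json_schema above
interface BugReport {
  error_code: number;
  file_name: string;
  is_critical: boolean;
}

// Hypothetical runtime guard: verifies the parsed object before a DB insert
function isBugReport(value: unknown): value is BugReport {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    Number.isInteger(v.error_code) &&
    typeof v.file_name === "string" &&
    typeof v.is_critical === "boolean"
  );
}

const parsed: unknown = JSON.parse(
  '{"error_code":500,"file_name":"api.ts","is_critical":true}'
);
if (isBugReport(parsed)) {
  // Safe to insert into Postgres: types are verified at runtime
  console.log(parsed.is_critical); // true
}
```

If the guard ever fails, you can log the raw payload and skip the row instead of crashing the overnight run.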

Function Calling (Tools)

Instead of asking the model to write code, give it tools to execute code.

const response = await openai.chat.completions.create({
  model: "gpt-5.2-turbo",
  messages: [{ role: "user", content: "What is the status of ticket PROJ-123?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "query_jira_ticket",
        description: "Get the current status and assignee of a Jira ticket.",
        parameters: {
          type: "object",
          properties: {
            ticket_id: { type: "string", pattern: "^[A-Z]+-[0-9]+$" }
          },
          required: ["ticket_id"]
        }
      }
    }
  ],
  tool_choice: "auto" // Let the model decide if it needs the tool
});

// Check if the model wants to call a tool
if (response.choices[0].message.tool_calls) {
  const toolCall = response.choices[0].message.tool_calls[0];
  console.log(`Model wants to run: ${toolCall.function.name}`);
  const args = JSON.parse(toolCall.function.arguments);
  // Execute your local Jira API function here...
}
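The snippet above only detects the tool call; to close the loop you execute the function locally and send the result back in a second request as a `role: "tool"` message referencing the `tool_call_id`. A minimal sketch of assembling that follow-up `messages` array — `buildToolFollowUp` and the fake Jira result are illustrative, not SDK helpers:

```typescript
type ChatMessage =
  | { role: "user" | "system"; content: string }
  | { role: "assistant"; content: string | null; tool_calls?: unknown[] }
  | { role: "tool"; tool_call_id: string; content: string };

// Hypothetical helper: appends the assistant's tool call and your local
// tool result so the model can compose a final answer on the next request.
function buildToolFollowUp(
  history: ChatMessage[],
  assistantMessage: { content: string | null; tool_calls?: unknown[] },
  toolCallId: string,
  toolResult: unknown
): ChatMessage[] {
  return [
    ...history,
    { role: "assistant", ...assistantMessage },
    { role: "tool", tool_call_id: toolCallId, content: JSON.stringify(toolResult) },
  ];
}

const followUp = buildToolFollowUp(
  [{ role: "user", content: "What is the status of ticket PROJ-123?" }],
  { content: null, tool_calls: [{ id: "call_abc" }] },
  "call_abc",
  { status: "In Progress", assignee: "dana" } // pretend Jira response
);
// followUp is now ready to pass back as `messages` in a second create() call
```

Tool results are always strings, which is why the Jira response gets `JSON.stringify`-ed before going back to the model.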

Handling Context Window Limits

Even with large windows, dumping massive log files into the prompt gets expensive and inflates Time-To-First-Token (TTFT).

The Truncation Pattern: Always slice log files before sending them to the API. The error is usually at the bottom.

const MAX_CHARS = 10000;
// Grab the last 10k characters of a massive server log
const truncatedLog = rawLogString.slice(-MAX_CHARS);

const response = await openai.chat.completions.create({
  model: "gpt-5.2-turbo",
  messages: [
    { role: "system", content: "You are an expert devops engineer." },
    { role: "user", content: `Analyze this log tail:\n\n${truncatedLog}` }
  ]
});
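To decide how much log to keep, a rough chars-per-token heuristic is usually good enough for budgeting — roughly 4 characters per token for English text; use a real tokenizer when you need exact counts. A sketch, with `estimateTokens` and `fitsBudget` as hypothetical helpers:

```typescript
// Rough heuristic: ~4 characters per token for English text/logs.
// This is a budget estimate only, not an exact tokenizer.
const CHARS_PER_TOKEN = 4;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

// Hypothetical budget check before sending a prompt
function fitsBudget(text: string, maxTokens: number): boolean {
  return estimateTokens(text) <= maxTokens;
}

const log = "x".repeat(10_000);
console.log(estimateTokens(log)); // 2500
```

Pair this with the slice pattern above: estimate first, and only truncate when the tail still blows the budget.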

System Prompts for Code Generation

Stop telling the model “Please be a helpful assistant.” Tell it exactly what to output.

The “No Yapping” Prompt:

“You are a senior TypeScript engineer. Output ONLY valid, executable TypeScript code. Do not wrap the code in markdown blocks. Do not explain your solution. Do not say ‘Here is the code’. Start immediately with the imports.”

The Scenario: You’re building an automated PR reviewer using GitHub Actions. The AI keeps responding with “Here is your reviewed code! I hope this helps!” followed by the markdown block. Your regex parser breaks every time because it expects raw code. You apply the “No Yapping” system prompt. The AI outputs pure syntax. The pipeline stops breaking.
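Even with that prompt, models occasionally relapse into fenced output, so a defensive stripper in front of the parser is cheap insurance. A sketch — `stripCodeFence` is an illustrative helper, and the `FENCE` constant just avoids literal backtick fences inside this snippet:

```typescript
const FENCE = "`".repeat(3); // three backticks, built to avoid nesting fences here

// Defensive fallback: if the model still wraps output in a markdown fence
// (with or without a language tag), strip it before parsing.
function stripCodeFence(output: string): string {
  const trimmed = output.trim();
  const fenceMatch = trimmed.match(
    new RegExp(`^${FENCE}[\\w-]*\\n([\\s\\S]*?)\\n?${FENCE}$`)
  );
  return fenceMatch ? fenceMatch[1] : trimmed;
}

const wrapped = `${FENCE}typescript\nimport fs from "fs";\n${FENCE}`;
console.log(stripCodeFence(wrapped)); // import fs from "fs";
```

Unfenced output passes through untouched, so the helper is safe to leave in the pipeline permanently.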


Found this useful? Check out our comparison of AutoGPT vs. OpenAI Agents SDK.