Humanizer — transparent AI editing + detector

This page shows a safe, transparent example: it edits ("humanizes") text using an LLM and can optionally check the result with a detector API you provide. It does not attempt to hide that AI was used; instead, it displays detector scores and keeps attribution visible.

Setup & safety notes

  1. THIS TOOL IS INTENDED FOR TRANSPARENCY: it must not be used to deceive people about the origin of content. The UI shows when AI was used and shows detector results.
  2. To make this page work you need a small backend (example below) that keeps your API keys secret. Do NOT put secret API keys in browser JavaScript.
  3. The frontend calls two endpoints:
    • /api/humanize — accepts POST JSON {text, tone, length} and returns {humanized}.
    • /api/detect — accepts POST JSON {text} and returns {score, details}, where score is between 0 and 1 and higher means "more likely AI-generated". You can plug in any detector you trust or host your own model.
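The two calls above could be wired together in the browser roughly like this. It is a sketch: the `describeScore` helper and its thresholds are illustrative only, not part of any detector's specification.

```javascript
// Map a detector score in [0, 1] to a human-readable label.
// The thresholds here are illustrative, not calibrated.
function describeScore(score) {
  if (typeof score !== 'number' || Number.isNaN(score)) return 'unknown';
  if (score >= 0.8) return 'likely AI-generated';
  if (score >= 0.5) return 'possibly AI-assisted';
  return 'likely human-written';
}

// Sketch of the browser-side flow (assumes the backend below is running).
async function humanizeAndDetect(text, tone = 'conversational', length = 'same') {
  const hRes = await fetch('/api/humanize', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text, tone, length })
  });
  const { humanized } = await hRes.json();

  const dRes = await fetch('/api/detect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: humanized })
  });
  const { score } = await dRes.json();

  // Show both the edit and the score so AI involvement stays visible.
  return { humanized, score, label: describeScore(score) };
}
```

Displaying the label next to the raw score keeps the transparency promise: the user always sees what the detector said, not just the edited text.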

Example backend (Node/Express) — create a file server.js

// Minimal example (do NOT expose your keys).
// Requires Node 18+ (built-in fetch). Run: npm init -y && npm i express
// (node-fetch v3 is ESM-only and cannot be require()d; built-in fetch avoids it.)

const express = require('express');
const app = express();
app.use(express.json());

// Read keys from env
const OPENAI_API_KEY = process.env.OPENAI_API_KEY; // for humanizing via ChatGPT
const DETECTOR_API_URL = process.env.DETECTOR_API_URL; // optional: your detector endpoint

app.post('/api/humanize', async (req, res) => {
  const { text = '', tone = 'conversational', length = 'same' } = req.body;
  if (!OPENAI_API_KEY) return res.status(500).json({ error: 'OpenAI API key not configured' });

  // Example using OpenAI Chat Completions (replace with your preferred call)
  const system = `You are a helpful editor. Rephrase the user's text to sound human, natural, and ${tone}. Keep the meaning intact. If length is 'shorter' or 'longer', adjust accordingly.`;
  const userPrompt = `Edit this text for naturalness and human tone. Preserve meaning.\n\nText:\n"""${text}"""\n\nReturn only the edited text.`;

  try {
    const resp = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Authorization': `Bearer ${OPENAI_API_KEY}` },
      body: JSON.stringify({
        model: 'gpt-4o-mini',
        messages: [{ role: 'system', content: system }, { role: 'user', content: userPrompt }],
        max_tokens: 800
      })
    });
    if (!resp.ok) return res.status(502).json({ error: `Upstream error: ${resp.status}` });
    const data = await resp.json();
    const humanized = data?.choices?.[0]?.message?.content ?? '';
    res.json({ humanized });
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
});

app.post('/api/detect', async (req, res) => {
  const { text = '' } = req.body;
  if (!DETECTOR_API_URL) return res.status(500).json({ error: 'No detector configured. Set DETECTOR_API_URL.' });
  // Proxy to your detector of choice. It must accept {text} and return {score}.
  try {
    const resp = await fetch(DETECTOR_API_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ text })
    });
    if (!resp.ok) return res.status(502).json({ error: `Detector error: ${resp.status}` });
    res.json(await resp.json());
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
});

app.listen(3000, () => console.log('Server on http://localhost:3000'));

// NOTES:
// - Do NOT use client-side API keys. Keep keys on server.
// - This example is intentionally generic about detectors: do not use it to avoid transparency.
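Because detector APIs differ in response shape, the proxy could normalize what it returns before forwarding it to the frontend. A sketch — the field names handled here (`score`, `probability`, `ai_probability`) are illustrative guesses, not any particular vendor's API:

```javascript
// Normalize assorted detector response shapes into { score, details }.
// The alternate field names checked here are hypothetical examples.
function normalizeDetectorResult(raw) {
  const candidate = raw?.score ?? raw?.probability ?? raw?.ai_probability;
  const num = Number(candidate);
  // Clamp to [0, 1]; null signals "no usable score" rather than guessing.
  const score = Number.isFinite(num) ? Math.min(1, Math.max(0, num)) : null;
  return { score, details: raw ?? null };
}
```

Passing the raw payload through as `details` keeps the original detector output visible to the user, which fits the transparency goal of this page.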

If you want, you can host a local detector (e.g. a small open-source classifier) or call a third-party detector API that you trust. The important rule: be transparent with end users when AI helped produce content.