Changelog
API updates and changes.
April 2026 — GPT Image 2 (OpenAI)
OpenAI's new state-of-the-art image model lands on `POST /v2/images/generate` as `gpt-image-2`, alongside the existing `gpt-image-1.5`. It is backed by `openai/gpt-image-2` on Replicate and exposed through the same `replicate-gpt-image` provider.
- Same dual-mode contract as 1.5: omit `input_images` for text-to-image, or pass one or more URLs to edit / compose. The model always processes inputs at high fidelity.
- New `gptImage2Params.moderation` (`auto`/`low`) for less restrictive content filtering. `webp` is the default output format.
- No transparent background — keep using `gpt-image-1.5` for transparent PNGs.
- Pricing per output image: low 4 / medium 9 / high 23 / auto 23 credits, multiplied by `number_of_images`. See Credit costs and the `gpt-image-2` reference.
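For illustration, a minimal sketch of both modes with Python's `requests`, assuming a placeholder base URL; `input_images`, `gptImage2Params.moderation`, and `number_of_images` come from the notes above, while the remaining field names are assumptions:

```python
import requests

BASE = "https://api.example.com"  # placeholder base URL
HEADERS = {"X-API-Key": "YOUR_API_KEY"}

# Text-to-image: omit input_images entirely.
t2i = requests.post(f"{BASE}/v2/images/generate", headers=HEADERS, json={
    "model": "gpt-image-2",
    "prompt": "a lighthouse at dusk, volumetric fog",
    "number_of_images": 2,
    "gptImage2Params": {"moderation": "low"},  # "auto" or "low"
})
print(t2i.json())

# Edit / compose: pass one or more input image URLs.
edit = requests.post(f"{BASE}/v2/images/generate", headers=HEADERS, json={
    "model": "gpt-image-2",
    "prompt": "replace the sky with an aurora",
    "input_images": ["https://example.com/photo.jpg"],
})
print(edit.json())  # outputs default to webp
```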
April 2026 — AI Photos (LoRA training)
A brand-new family of endpoints lets you train a Flux LoRA from 15-30 subject photos and then generate AI photos of that subject from any text prompt.
- `POST /v2/loras` — multipart upload + start training. The API auto-captions every image with a vision model, packages the dataset, and runs the LoRA training. Returns 202 with the new LoRA id.
- `GET /v2/loras` — paginated list of your team's LoRAs.
- `GET /v2/loras/:id` — single lookup; status transitions through `PENDING → UPLOADING → TRAINING → READY` (or `FAILED`). Optional `webhookUrl` fires on terminal state.
- `DELETE /v2/loras/:id` — soft-archive.
- `POST /v2/images/generate` — new model `flux-lora`: pass `loraId` plus a prompt and the API resolves the trained weights and prepends the trigger word for you. 1-4 outputs per request.
Pricing is flat: 2 credits for create (upload+caption+zip), 255 credits for training, 2 credits per image for inference. See Credit costs → AI Photos and the AI Photos overview.
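A hypothetical end-to-end sketch of the train-then-generate flow: the routes, statuses, `webhookUrl`, `loraId`, and the 1-4 output limit come from the notes above, while the multipart field name `images`, the `id`/`status` response fields, and the polling cadence are assumptions.

```python
import time
import requests

BASE = "https://api.example.com"  # placeholder base URL
HEADERS = {"X-API-Key": "YOUR_API_KEY"}

# 1) Multipart upload of 15-30 subject photos; training starts immediately.
files = [("images", open(f"subject_{i:02d}.jpg", "rb")) for i in range(15)]
created = requests.post(f"{BASE}/v2/loras", headers=HEADERS, files=files,
                        data={"webhookUrl": "https://example.com/hooks/lora"})
lora_id = created.json()["id"]  # 202 Accepted with the new LoRA id

# 2) Poll until a terminal state (a webhook would avoid this loop).
while True:
    status = requests.get(f"{BASE}/v2/loras/{lora_id}", headers=HEADERS).json()["status"]
    if status in ("READY", "FAILED"):
        break
    time.sleep(30)

# 3) Generate: the API resolves the weights and prepends the trigger word.
if status == "READY":
    imgs = requests.post(f"{BASE}/v2/images/generate", headers=HEADERS, json={
        "model": "flux-lora",
        "loraId": lora_id,
        "prompt": "portrait in a rain-soaked neon alley",
        "number_of_images": 4,  # 1-4 per request
    })
    print(imgs.json())
```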
April 2026 — Wan 2.7
The Wan 2.7 family ships across both Replicate and Alibaba Cloud, with a brand-new video-editing route:
- `POST /v2/videos/generate` — `wan-2.7`: combined T2V + I2V at 720p/1080p, 2–15 s. New first-and-last-frame (`last_frame`) and clip-continuation (`first_clip`) anchors. Audio is auto-generated when `audio` is omitted.
- `POST /v2/videos/generate` — `wan-2.7-r2v`: reference-to-video. Up to 5 `reference_images` and/or 5 `reference_videos`, plus `shot_type` for multi-shot narratives.
- `POST /v2/videos/edit` (NEW) — `wan-2.7-videoedit`: prompt-driven restyle, outfit swap, and background replacement on existing clips. The route probes input duration via ffprobe when `duration` is omitted.
All three are billed per second of output. See Credit costs → Wan 2.7.
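A sketch of a generation call and the new edit route, assuming a placeholder base URL; `last_frame`, `audio`, `duration`, and the model names come from the notes above, while the start-frame and input-video field names are assumptions:

```python
import requests

BASE = "https://api.example.com"  # placeholder base URL
HEADERS = {"X-API-Key": "YOUR_API_KEY"}

# T2V/I2V with a first-and-last-frame anchor; omitting "audio"
# lets the model auto-generate a soundtrack.
gen = requests.post(f"{BASE}/v2/videos/generate", headers=HEADERS, json={
    "model": "wan-2.7",
    "prompt": "a paper boat drifting down a rainy street, cinematic",
    "start_image": "https://example.com/first.jpg",  # assumed field name
    "last_frame": "https://example.com/last.jpg",
    "duration": 8,  # 2-15 s
})
print(gen.json())

# Prompt-driven edit of an existing clip; duration is probed
# via ffprobe when omitted.
edit = requests.post(f"{BASE}/v2/videos/edit", headers=HEADERS, json={
    "model": "wan-2.7-videoedit",
    "video": "https://example.com/clip.mp4",  # assumed field name
    "prompt": "swap the jacket for a red raincoat",
})
print(edit.json())
```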
April 2026 — Editing endpoints
Four new endpoints for editing existing media, on dedicated routes alongside generation:
- `POST /v2/images/upscale` — Topaz Image Upscale for realistic photos and Clarity Upscaler for creative AI art.
- `POST /v2/images/background-remove` — Bria RMBG 2.0 for production-quality alpha mattes and 851-labs for cheap, high-volume jobs.
- `POST /v2/images/edit` — Flux Fill Pro for masked inpainting and preset-driven outpainting (mode-switched).
- `POST /v2/videos/upscale` — Topaz Video Upscale with up to 4K / 60 fps output and per-second pricing.
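As a sketch of two of these routes; the route paths are documented above, but every field name here, including the `model` selector used to pick a provider, is an assumption:

```python
import requests

BASE = "https://api.example.com"  # placeholder base URL
HEADERS = {"X-API-Key": "YOUR_API_KEY"}

# Production-quality alpha matte; "model" as the provider switch is assumed.
cutout = requests.post(f"{BASE}/v2/images/background-remove", headers=HEADERS, json={
    "image": "https://example.com/product.jpg",
    "model": "bria-rmbg-2.0",  # or an 851-labs slug for high-volume jobs
})
print(cutout.json())

# Masked inpainting on the mode-switched edit route.
fill = requests.post(f"{BASE}/v2/images/edit", headers=HEADERS, json={
    "model": "flux-fill-pro",
    "mode": "inpaint",  # assumed switch between inpainting and outpainting
    "image": "https://example.com/room.jpg",
    "mask": "https://example.com/room-mask.png",
    "prompt": "a mid-century armchair",
})
print(fill.json())
```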
v2.0.0 — March 2026
Initial release of the Apiframe v2 API.
Image generation
- Midjourney model with aspect ratio support
- Nano Banana model with image-to-image and output format options
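For example, a hypothetical Midjourney request; the model slug and the `aspect_ratio` parameter name are assumptions:

```python
import requests

resp = requests.post("https://api.example.com/v2/images/generate",  # placeholder base URL
                     headers={"X-API-Key": "YOUR_API_KEY"},
                     json={
                         "model": "midjourney",   # assumed slug
                         "prompt": "isometric pixel-art harbor town",
                         "aspect_ratio": "16:9",  # assumed parameter name
                     })
print(resp.json())
```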
Video generation
- Kling 2.6 Pro with 5s/10s duration, start image, audio generation, and negative prompts
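A comparable sketch for Kling, with every field name beyond `prompt` an assumption:

```python
import requests

resp = requests.post("https://api.example.com/v2/videos/generate",  # placeholder base URL
                     headers={"X-API-Key": "YOUR_API_KEY"},
                     json={
                         "model": "kling-2.6-pro",  # assumed slug
                         "prompt": "drone shot over a glacier lagoon",
                         "negative_prompt": "text, watermark",  # assumed field name
                         "duration": 10,            # 5 or 10 seconds
                         "start_image": "https://example.com/frame.jpg",  # assumed field name
                     })
print(resp.json())
```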
Music generation
- Suno with custom lyrics mode, instrumental mode, style control, and model version selection (V4 through V5)
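And for Suno; the `/v2/music/generate` route and all field names here are assumptions:

```python
import requests

resp = requests.post("https://api.example.com/v2/music/generate",  # assumed route, placeholder base URL
                     headers={"X-API-Key": "YOUR_API_KEY"},
                     json={
                         "model": "suno",
                         "version": "V5",                            # V4 through V5
                         "style": "dreamy synthwave",
                         "lyrics": "[Verse]\nNeon rivers under glass",  # custom lyrics mode
                         "instrumental": False,
                     })
print(resp.json())
```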
Platform
- API key authentication with the `X-API-Key` header
- Webhook notifications for job progress, completion, and failure
- Cursor-based pagination on all list endpoints
- Idempotency key support for generation requests
- Credit-based billing with automatic refunds on failures
- Team management with role-based access
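A sketch tying the platform pieces together; `X-API-Key` is documented above, while the `Idempotency-Key` header name and the `cursor`/`next_cursor`/`data` pagination fields are assumptions:

```python
import uuid
import requests

BASE = "https://api.example.com"  # placeholder base URL
HEADERS = {
    "X-API-Key": "YOUR_API_KEY",
    "Idempotency-Key": str(uuid.uuid4()),  # assumed header name; makes retries safe
}

# A generation request that can be retried without double-billing.
job = requests.post(f"{BASE}/v2/images/generate", headers=HEADERS,
                    json={"model": "midjourney", "prompt": "watercolor fox"})
print(job.json())

# Walk a cursor-paginated list endpoint page by page.
cursor, loras = None, []
while True:
    page = requests.get(f"{BASE}/v2/loras", headers=HEADERS,
                        params={"cursor": cursor} if cursor else None).json()
    loras.extend(page["data"])        # assumed envelope field
    cursor = page.get("next_cursor")  # assumed cursor field
    if not cursor:
        break
print(f"{len(loras)} LoRAs")
```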