Getting Started

How it works

Install qckfx and it runs in your menu bar. When you launch your app in the iOS Simulator, qckfx automatically starts recording your interactions — taps, swipes, text input, and navigation — along with screenshots at each step. Network responses (HTTP and WebSocket) are also recorded and replayed during tests, so your app sees the same backend data every run. The simulator's initial disk state — UserDefaults, databases, Keychain entries, and other on-disk data — is captured at recording time and restored before each test run.
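To make the replay idea concrete, here is a minimal sketch of how deterministic HTTP replay can work in principle, using Foundation's URLProtocol. This illustrates the general technique only; qckfx's actual recording and replay mechanism is not documented here, the RecordedResponse type and URL-keyed lookup are assumptions, and WebSocket replay is not shown.

```swift
import Foundation

// Sketch of deterministic network replay via URLProtocol.
// Illustrative only — not qckfx's actual implementation.
// `RecordedResponse` and the URL-keyed lookup are hypothetical.

struct RecordedResponse {
    let statusCode: Int
    let headers: [String: String]
    let body: Data
}

final class ReplayURLProtocol: URLProtocol {
    // Responses captured during recording, keyed by request URL (assumption).
    static var recorded: [URL: RecordedResponse] = [:]

    override class func canInit(with request: URLRequest) -> Bool {
        // Only intercept requests we have a recording for.
        guard let url = request.url else { return false }
        return recorded[url] != nil
    }

    override class func canonicalRequest(for request: URLRequest) -> URLRequest {
        request
    }

    override func startLoading() {
        guard let url = request.url,
              let match = Self.recorded[url],
              let response = HTTPURLResponse(url: url,
                                             statusCode: match.statusCode,
                                             httpVersion: "HTTP/1.1",
                                             headerFields: match.headers) else {
            client?.urlProtocol(self, didFailWithError: URLError(.resourceUnavailable))
            return
        }
        // Serve the recorded response instead of hitting the real backend.
        client?.urlProtocol(self, didReceive: response, cacheStoragePolicy: .notAllowed)
        client?.urlProtocol(self, didLoad: match.body)
        client?.urlProtocolDidFinishLoading(self)
    }

    override func stopLoading() {}
}

// Usage: register the protocol so URLSession serves recorded responses.
// URLProtocol.registerClass(ReplayURLProtocol.self)
```

This is why every test run sees identical backend data: requests never reach the network, so flaky services or changing server state cannot affect the screenshots being compared.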

Recording

You can stop a recording manually with ⌘⇧S or by closing the app in the simulator — qckfx will prompt you to save the recording.

Important

Always start recordings from a normal app launch. Deep links are not supported. When running tests, qckfx always starts with a fresh launch of the app.

How tests run

When you run a test, qckfx builds the code currently on disk, installs the fresh build on the simulator, and replays the recorded interactions. At each step it compares the new screenshot against the recorded baseline to detect visual regressions.
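For context, the same build–install–replay–screenshot cycle can be expressed with standard Xcode command-line tooling. The sketch below uses real xcodebuild and simctl commands, but the scheme name, bundle identifier, simulator name, and paths are placeholders, and qckfx's internal pipeline may differ from this.

```swift
import Foundation

// Rough sketch of the test-run cycle, driven through standard Xcode tooling.
// Scheme, bundle id, device name, and paths below are placeholders.

@discardableResult
func run(_ command: String, _ arguments: [String]) throws -> Int32 {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: command)
    process.arguments = arguments
    try process.run()
    process.waitUntilExit()
    return process.terminationStatus
}

// 1. Build the code currently on disk for the simulator.
try run("/usr/bin/xcodebuild", [
    "-scheme", "MyApp",                                   // placeholder scheme
    "-destination", "platform=iOS Simulator,name=iPhone 15",
    "-derivedDataPath", "build",
    "build"
])

// 2. Install the fresh build on the booted simulator.
try run("/usr/bin/xcrun", [
    "simctl", "install", "booted",
    "build/Build/Products/Debug-iphonesimulator/MyApp.app"
])

// 3. Launch the app from a normal, fresh launch (no deep links).
try run("/usr/bin/xcrun", ["simctl", "launch", "booted", "com.example.MyApp"])

// 4. After replaying each recorded step, capture a screenshot to compare
//    against that step's baseline image.
try run("/usr/bin/xcrun", ["simctl", "io", "booted", "screenshot", "step-01.png"])
```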

Data storage

All recordings, test data, and results are stored locally on your machine.

Menu bar options

  • Version header
  • Last test run results
  • Stop & Save / Save
  • Run Tests
  • Check for Updates
  • Help
  • Install as MCP (Claude Code / Codex / Cursor)
  • Quit

Keyboard shortcuts

Shortcut   Action
⌘⇧S        Save recording
⌘⇧T        Run tests

Test picker

The test picker is a floating window with scrollable test cards, each showing a GIF preview of the recorded interaction. You can select multiple tests to run at once, or right-click a test to delete it.

Test results

After a test run, the diff viewer lets you compare expected and actual screenshots with overlay or side-by-side modes. You can navigate step-by-step through the test, jump directly to failed steps, and either accept the new screenshots as the baseline or re-record the test.
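As a rough illustration of what comparing expected and actual screenshots involves, the sketch below computes the fraction of pixels that differ between a baseline image and a fresh capture. It is a conceptual example only; qckfx's actual comparison, tolerances, and handling of dynamic content are not documented here.

```swift
import CoreGraphics

// Minimal sketch of a per-step pixel comparison behind a pass/fail decision.
// Illustrative only — thresholds, masking of dynamic regions, and the real
// comparison used by qckfx are assumptions.

func rgbaBytes(of image: CGImage) -> [UInt8]? {
    let width = image.width, height = image.height
    var bytes = [UInt8](repeating: 0, count: width * height * 4)
    let drew = bytes.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else {
            return false
        }
        // Render the image into a known RGBA layout so bytes are comparable.
        context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return drew ? bytes : nil
}

/// Fraction of pixels whose RGB values differ between the expected
/// (baseline) screenshot and the actual screenshot from the latest run.
func diffRatio(expected: CGImage, actual: CGImage) -> Double? {
    guard expected.width == actual.width, expected.height == actual.height,
          let a = rgbaBytes(of: expected), let b = rgbaBytes(of: actual) else {
        return nil
    }
    let pixelCount = expected.width * expected.height
    var changed = 0
    for i in 0..<pixelCount {
        let o = i * 4
        if a[o] != b[o] || a[o + 1] != b[o + 1] || a[o + 2] != b[o + 2] {
            changed += 1
        }
    }
    return Double(changed) / Double(pixelCount)
}

// A step might be flagged as a visual regression when the ratio exceeds some
// threshold, e.g. diffRatio(expected: baseline, actual: latest)! > 0.001
```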

Logs & network requests

Console logs and network requests are captured during test runs and exposed via MCP tools for agent-driven debugging. A built-in UI for viewing these is coming soon.