Troubleshooting

Solutions for common issues and error messages.

This page covers the most frequently encountered problems when using Floww. Each section describes the symptoms, explains the cause, and provides step-by-step solutions.

Claude CLI Issues

Floww uses the Claude CLI under the hood for LLM nodes that target Anthropic models. These issues typically surface as errors inside LLM-type nodes.

claude: command not found

Cause: The Claude CLI is not installed or is not in your system PATH.

Solution:

  1. Install the Claude CLI:
    npm install -g @anthropic-ai/claude-cli
  2. Verify it is accessible:
    claude --version
  3. If installed but not found, ensure the npm global bin directory is in your PATH. On macOS/Linux:
    export PATH="$(npm prefix -g)/bin:$PATH"
    Add this line to your ~/.bashrc or ~/.zshrc to make it permanent.
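
The steps above can be combined into a quick check script (a sketch: it only reports status and applies the PATH fix to the current session; it does not install anything):

```shell
# Sketch: verify the Claude CLI is resolvable; if not, try adding
# npm's global bin directory to PATH for the current shell session.
if command -v claude >/dev/null 2>&1; then
  status="found"
else
  # $(npm prefix -g) expands to the npm global prefix, e.g. /usr/local
  export PATH="$(npm prefix -g 2>/dev/null)/bin:$PATH"
  if command -v claude >/dev/null 2>&1; then
    status="found after PATH fix"
  else
    status="missing"
  fi
fi
echo "claude: $status"
```

If the PATH fix is what resolves it, remember to persist the export line in your shell profile as described above.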

ANTHROPIC_API_KEY not set

Cause: The ANTHROPIC_API_KEY environment variable is not set, so the Claude CLI cannot authenticate with the Anthropic API.

Solution:

  1. Get your API key from console.anthropic.com.
  2. Set it as an environment variable:
    export ANTHROPIC_API_KEY=sk-ant-...
  3. Alternatively, set it in Floww via Settings → Integrations → Anthropic. Floww will pass the key to the CLI automatically.
Warning: do not hardcode API keys. Never paste your API key directly into a node's configuration. Use environment variables or Floww's built-in secrets manager instead.
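
A small pre-flight check can catch a missing or malformed key before a workflow fails mid-run (a sketch; the sk-ant- prefix check reflects the current Anthropic key format, which may change):

```shell
# Sketch: validate that the key is present and looks like an Anthropic key.
check_anthropic_key() {
  case "${ANTHROPIC_API_KEY:-}" in
    "")       echo "missing: set ANTHROPIC_API_KEY or configure it in Floww" ;;
    sk-ant-*) echo "ok" ;;
    *)        echo "set, but does not look like an Anthropic key (expected sk-ant-...)" ;;
  esac
}
check_anthropic_key
```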

Rate limit exceeded (429)

Cause: You have sent too many requests in a short period and hit the Anthropic API rate limit.

Solution:

  • Reduce the number of LLM nodes running in parallel. In workflow settings, set Max Parallel LLM Calls to a lower value (e.g., 2-3).
  • Enable Auto-Retry in the node settings. Floww will automatically back off and retry with exponential delay.
  • Check your usage tier at console.anthropic.com and request a rate limit increase if needed.
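
Auto-Retry's behavior is, in spirit, a classic exponential-backoff loop. A standalone sketch (call_api here is a stand-in that simulates two failures, not a real request):

```shell
# Sketch of exponential backoff, similar in spirit to Floww's Auto-Retry.
attempts=0
call_api() {
  # Stand-in for a real request: fail twice, then succeed.
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

delay=1
for attempt in 1 2 3 4 5; do
  if call_api; then
    echo "success on attempt $attempt"
    break
  fi
  echo "attempt $attempt failed; retrying in ${delay}s"
  sleep "$delay"
  delay=$((delay * 2))   # double the delay before each retry
done
```

Doubling the delay between attempts gives the API time to recover and keeps a burst of retries from immediately re-triggering the 429.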

Ollama Connection

Floww can connect to a local Ollama instance for running open-weight models. These issues appear in LLM nodes configured to use Ollama.

Connection refused: localhost:11434

Cause: The Ollama server is not running.

Solution:

  1. Start the Ollama server:
    ollama serve
  2. Verify it is listening:
    curl http://localhost:11434/api/tags
    You should see a JSON response listing available models.
  3. If Ollama is running on a non-default port or remote host, update the endpoint in Settings → Integrations → Ollama.
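
Steps 1-2 can be wrapped in a small reachability check (a sketch; assumes curl is installed and uses the default endpoint unless you pass another URL):

```shell
# Sketch: report whether an Ollama server answers at the given endpoint.
ollama_up() {
  curl -fsS --max-time 5 "${1:-http://localhost:11434}/api/tags" >/dev/null 2>&1
}

if ollama_up "${OLLAMA_URL:-}"; then
  echo "Ollama is reachable"
else
  echo "Ollama is not reachable; start it with: ollama serve"
fi
```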

Model not found: llama3

Cause: The model specified in the node configuration has not been pulled to your local Ollama instance.

Solution:

ollama pull llama3

Wait for the download to complete, then retry the workflow. You can list all available local models with ollama list.

Request timeout after 120s

Cause: The model is taking too long to generate a response, often because the prompt is very long or the model is still loading into memory.

Solution:

  • The first request after starting Ollama is slow because the model must be loaded into GPU/RAM. Wait for the initial load to complete.
  • Increase the timeout in the node's advanced settings: Node Settings → Timeout (default is 120 seconds).
  • If you are running on CPU only, consider using a smaller model (e.g., llama3:8b instead of llama3:70b).
  • Ensure no other application is competing for GPU memory.
Tip: keep models warm. Run ollama run llama3 "" before executing your workflow to pre-load the model into memory. This eliminates the cold-start delay.

WebKit2GTK Issues

On Linux, Floww uses WebKit2GTK for its embedded webview. These issues are specific to Linux installations.

libwebkit2gtk-4.1.so: cannot open shared object file

Cause: The WebKit2GTK library is not installed on your system.

Solution: Install it using your distribution's package manager:

  Distribution       Command
  Ubuntu / Debian    sudo apt install libwebkit2gtk-4.1-dev
  Fedora             sudo dnf install webkit2gtk4.1-devel
  Arch Linux         sudo pacman -S webkit2gtk-4.1
  openSUSE           sudo zypper install webkit2gtk3-devel

After installation, restart Floww.
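
The table above can be turned into a tiny helper that picks the command from the ID field of /etc/os-release (a sketch; the package names are the ones listed above and may differ on newer releases):

```shell
# Sketch: map a distro ID (from /etc/os-release) to the install command.
webkit_install_cmd() {
  case "$1" in
    ubuntu|debian) echo "sudo apt install libwebkit2gtk-4.1-dev" ;;
    fedora)        echo "sudo dnf install webkit2gtk4.1-devel" ;;
    arch)          echo "sudo pacman -S webkit2gtk-4.1" ;;
    opensuse*)     echo "sudo zypper install webkit2gtk3-devel" ;;
    *)             echo "unknown distribution: $1" >&2; return 1 ;;
  esac
}

# Example: detect the current distro and print the command to run.
webkit_install_cmd "$( . /etc/os-release 2>/dev/null && echo "$ID" )" || true
```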

Rendering glitches or blank window

Cause: GPU acceleration issues with certain Linux drivers, particularly with Wayland compositors or older Intel/AMD drivers.

Solution:

  1. Try disabling GPU acceleration by launching Floww with:
    WEBKIT_DISABLE_COMPOSITING_MODE=1 floww
  2. If using Wayland and experiencing issues, try running under XWayland:
    GDK_BACKEND=x11 floww
  3. Update your graphics drivers to the latest available version for your distribution.
  4. Check if the issue persists with a minimal workflow (single node). If so, it is likely a system-level graphics issue rather than a Floww bug.
Tip: when filing a bug report for rendering glitches, include the output of floww --diagnostics, which captures your WebKit version, GPU driver info, and display server type.

Common DAG Errors

Floww workflows are directed acyclic graphs (DAGs). The engine validates the graph structure before execution. These errors indicate structural problems in your workflow.

Cycle detected in workflow

Cause: Your workflow contains a circular dependency — node A depends on node B, which depends on node A (directly or through a chain).

Solution:

  • Floww highlights the cycle in red on the canvas. Follow the highlighted edges to identify where the loop occurs.
  • Remove or redirect one of the edges to break the cycle.
  • If you need iterative behavior, use a Loop node instead of creating a manual cycle. Loop nodes handle iteration safely within the DAG model.
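
Outside Floww, you can reproduce the same validation with tsort from GNU coreutils, which topologically sorts an edge list and exits non-zero when the edges contain a loop (a sketch to illustrate the check; Floww's own validator is internal):

```shell
# Sketch: detect a cycle in an edge list using tsort.
# Each input line "A B" means node A must run before node B.
has_cycle() {
  ! tsort >/dev/null 2>&1
}

if printf 'A B\nB C\nC A\n' | has_cycle; then
  echo "cycle detected: break one of the edges in A->B->C->A"
fi
```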

Missing required input on node "X"

Cause: A node has an input port that is not connected to any upstream node and has no default value.

Solution:

  • Connect an edge to the missing input port, or
  • Open the node's settings and provide a default value for the input, or
  • Mark the input as optional in the node configuration if the node supports it.

Type mismatch: expected "string", got "object"

Cause: A node is receiving data of a different type than it expects. For example, an LLM node's output (an object with metadata) is connected directly to a node expecting a plain string.

Solution:

  • Insert a Transform node between the two nodes to extract the correct field. For example, use {{input.text}} to extract just the text content from an LLM response object.
  • Check the output schema of the upstream node (visible in the node inspector) and ensure it matches what the downstream node expects.
Tip: hover over any edge on the canvas to see the data type flowing through it. This makes it easy to spot type mismatches before running the workflow.
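
The same fix applies outside Floww: given an object-shaped response, extract the string field before handing it to something that expects plain text (a sketch using jq; the response shape here is illustrative, not Floww's exact LLM output schema):

```shell
# Sketch: pull a plain string out of an object-shaped response with jq,
# analogous to what a Transform node with {{input.text}} does.
response='{"text": "Hello from the model", "model": "example", "tokens": 12}'
echo "$response" | jq -r '.text'
```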

Performance Tips

Floww is built to handle large workflows, but there are practical limits depending on your hardware. Here are ways to keep things fast.

Large workflows

  • Split into sub-workflows. If a single workflow exceeds 200 nodes, consider breaking it into smaller, composable sub-workflows using the Workflow Reference node.
  • Collapse groups. Group related nodes and collapse them to reduce rendering overhead on the canvas.

Node limits

  Scenario                     Recommended max nodes
  Smooth canvas interaction    ~500 nodes
  Comfortable editing          ~200 nodes
  CLI execution (no canvas)    ~2,000 nodes

Memory usage

  • Each node consumes approximately 1-5 MB of memory during execution, depending on its data payload.
  • LLM nodes with long context windows can use significantly more. Monitor memory usage in View → Resource Monitor.
  • If Floww is consuming too much memory, try reducing the Max Parallel Nodes setting under Settings → Execution. This limits how many nodes run concurrently.
  • Clear completed node results with Edit → Clear All Results to free memory on long-running sessions.

Getting Help

If you cannot resolve your issue with the steps above, here is how to get further assistance.

GitHub Issues

File a bug report or feature request at github.com/nicepkg/floww/issues. Please include:

  • Your Floww version (Help → About Floww)
  • Your operating system and version
  • Steps to reproduce the issue
  • Output of floww --diagnostics
  • Screenshots or screen recordings if the issue is visual

Community channels

  • Discord — join the Floww Discord server for real-time help from the community and the development team.
  • GitHub Discussions — for longer-form questions, tips, and workflow showcases, visit GitHub Discussions.
Tip: run floww --diagnostics --bundle to generate a zip file containing your system info, Floww logs, and configuration (with secrets redacted). Attach this to your bug report for the fastest resolution.