
Self-Hosting Dify: Installation and Setup Guide

Introduction

Dify is an open-source platform for building AI applications without needing to write a lot of code. It’s often described as an “LLMOps platform” – meaning it helps manage and deploy Large Language Model operations – but you don’t have to be an expert to use it. With Dify, you can visually create AI workflows, connect them to powerful language models (like GPT-4 or other LLMs), and integrate retrieval of your own data (a technique known as RAG, Retrieval-Augmented Generation). The best part is that Dify can be self-hosted on your own server or computer, giving you complete control over your data and customizations. In this guide, we’ll explain what Dify is, why you might want to self-host it, and walk you through the step-by-step process of installing and setting up Dify on your own. This article is beginner-friendly, so don’t worry if you’re new to concepts like Docker or server hosting – we’ll cover the basics as we go.

What is Dify, and Why Self-Host It?

Dify (a portmanteau of “Define” and “Modify”) is a platform that enables you to create generative AI applications quickly. Think of it as a toolbox and visual builder for AI apps: it provides a drag-and-drop interface to design prompt workflows, an “AI agent” framework to automate decisions, integration with various AI models (from OpenAI’s GPT series to open-source models), and features like data storage and monitoring. In simple terms, Dify lets you build things like chatbots, AI assistants, or any application that uses large language models – all through a friendly web interface.

You might wonder why self-host Dify instead of using a cloud service. Here are a few good reasons:

  • Data Privacy & Control: Self-hosting means all data and interactions stay on your server, not a third-party cloud. If you’re working with sensitive information or just want peace of mind, running Dify locally or on your own server ensures you have full control over the data.
  • Customization: When you host it yourself, you can configure Dify however you need – connect it to custom machine learning models, adjust its code, or integrate with your own databases and tools. You’re not limited by a SaaS plan’s restrictions.
  • Cost Efficiency: Dify’s cloud version might have usage costs or limits. By hosting on your own hardware or an affordable cloud VPS, you could potentially save money, especially for heavy usage. You’ll still need to pay for the underlying infrastructure (and any API calls to AI models like OpenAI), but you won’t be paying a premium for the platform itself since Dify is open-source.
  • Learning and Ownership: For enthusiasts and developers, setting up your own instance of Dify is a great learning experience. You get to understand the workings of an AI application stack – from databases to background workers – and you truly own the platform. No one can shut it down on you or change the terms of service.

Of course, self-hosting comes with the responsibility of maintenance (updates, security, etc.). But Dify’s community and documentation are very helpful, and this guide will get you started on the right foot. Next, let’s go over what you need before installing Dify.

Prerequisites and Requirements

Before you start the installation, ensure you have the following ready:

  • A Suitable Server or PC: You can install Dify on a local computer (for testing/development) or on a cloud server for remote access. Dify isn’t too demanding, but for a smooth experience you should have at least a 2-core CPU and 4 GB of RAM available for it. More is always better if you plan to handle bigger workloads. If you use Docker Desktop on Windows/Mac, allocate enough resources (e.g. 2 CPUs and ~8 GB RAM in Docker settings) so the containers can run properly.
  • Operating System: Dify works on Linux, macOS, or Windows (the server components actually run in Linux containers). For production use, Linux (Ubuntu, Debian, etc.) is recommended. Windows users can use the Docker-based installation via WSL2. Mac users can use Docker Desktop as well.
  • Docker and Docker Compose: The easiest way to self-host Dify is using Docker containers. You’ll need to have Docker installed on your system, along with Docker Compose (which usually comes integrated with recent Docker versions). If you don’t have these, download Docker Desktop (for Windows/macOS) or install Docker Engine on Linux. Make sure Docker is running correctly. You can verify by running a simple command like docker --version in a terminal. Likewise, check docker compose version to confirm Docker Compose is available.
  • Internet Connection: The installation will need to download Docker images (Dify’s components and databases) from the internet, so ensure your machine can access the web. Also, once running, Dify may need internet access to call external AI model APIs (like OpenAI) unless you use only completely offline models.
  • Optional – Domain Name and SSL: If you are deploying on a server and want to access Dify via a nice URL (like https://ai.yourdomain.com), you should have a domain name and maybe prepare an SSL certificate. This is optional and can be configured after the basic setup. By default, Dify will run on port 80 (HTTP) on your server’s IP. For local testing, you can just use http://localhost.
  • Optional – API Keys for AI Models: Out of the box, Dify supports multiple AI providers (OpenAI, Azure OpenAI, Anthropic Claude, etc.) as well as open-source models. While not needed for installation, you will eventually need API keys or credentials for whichever AI model you plan to use (for example, an OpenAI API key if you want to use GPT-4 through Dify). Having those ready will help when configuring Dify after installation.
  • Basic Command-Line Knowledge: We will be running a few commands in a terminal (Command Prompt, PowerShell, or bash depending on your OS) to set up Dify. If you’re not very familiar with command-line operations, don’t worry – this guide will provide the exact commands to run. Take it slow and feel free to copy-paste the commands. If you’re completely new to Docker or servers, you might consider taking a beginner-friendly course (for example, a Docker introduction course on Udemy or a tutorial on Linux basics) to build your confidence. It’s not mandatory, but it can be helpful.
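
As a quick sanity check for the Docker prerequisite above, the following commands should all succeed before you continue (the hello-world run is optional, but it confirms Docker can actually pull and run images):

  docker --version             # prints the Docker version
  docker compose version       # Compose v2 should report a version, not an error
  docker run --rm hello-world  # optional: pulls a tiny test image and prints a greeting
  nproc && free -h             # Linux only: rough check of CPU cores and available RAM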

Tip: If you don’t have a server yet and are looking for an easy option, you can rent a small VPS from cloud providers like DigitalOcean or Linode at a low cost. These services let you create an Ubuntu server in minutes. Make sure to choose a plan with at least 4 GB RAM. Many providers offer promo credits for new users which can cover your first couple of months. Setting up on such a VPS via SSH will be very similar to the steps we describe.
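
If you do go the VPS route, the initial setup usually looks something like the sketch below (shown for a fresh Ubuntu server; 123.45.67.89 is a placeholder IP, and exact package names can vary slightly between distributions – see step 1 of the installation section for details):

  ssh root@123.45.67.89                               # connect to your new server
  apt-get update
  apt-get install -y docker.io docker-compose-plugin  # install Docker Engine and the Compose plugin
  systemctl enable --now docker                       # start Docker now and on every boot
  docker --version && docker compose version          # confirm both are available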

Now that you have everything ready, let’s outline the installation methods. We’ll primarily focus on the Docker Compose method, which is the simplest for most users. We’ll also briefly touch on installing from source code (a more advanced method for developers who might want to tweak Dify’s code).

Installation Methods Overview

Dify can be deployed in a couple of ways:

  • Method 1: Docker Compose Deployment (Recommended) – Dify’s developers provide a Docker Compose configuration that bundles everything needed (the Dify web app, API server, background worker, database, vector store, etc.) into Docker containers. Using this method, you’ll just clone the Dify repository and run a few Docker commands. It’s the quickest way to get up and running, and it works cross-platform. We’ll cover this in detail.
  • Method 2: Running from Source (Manual Setup) – Alternatively, you can run Dify’s components directly on your host by installing the required programming languages and libraries (Python, Node.js, etc.), suitable for development or customization. This gives you more flexibility to modify the code or update components individually. However, it’s more complex and not necessary if you just want to use Dify. We’ll give an overview of how to do this, but if you’re a beginner, you might skip this and stick to Docker Compose.

There are also other deployment options (for example, an aaPanel installation method, or using Kubernetes and tools like Pigsty for a more enterprise setup). Those are beyond the scope of this article. For most users, Docker Compose will do the job nicely. So let’s dive into the Docker Compose installation step-by-step.

Installing Dify with Docker Compose (Step-by-Step)

Method 1: Docker Compose Deployment
This section will guide you through installing Dify using Docker Compose. This is the easiest and most straightforward way to self-host Dify.

  1. Install Docker (and Compose) if not already installed: Ensure that Docker is running on your system. If you haven’t installed Docker:
    • Windows/macOS: Download and install Docker Desktop from the official Docker website. During installation, Docker Desktop will also set up Docker Compose. After installation, launch Docker Desktop and let it run in the background.
    • Linux: Install Docker Engine using your package manager (for example, on Ubuntu: sudo apt-get update then sudo apt-get install -y docker.io). Also install Docker Compose plugin if needed (on many newer distributions, Compose v2 is included; if not, you might install it via sudo apt-get install docker-compose-plugin or by downloading the binary). Make sure your user has permission to run Docker, or use sudo for Docker commands.
    • After installing, verify by running docker --version and docker compose version. You should see version info instead of errors. If you’re new to Docker, it might be helpful to run a test container (for instance, docker run hello-world) to confirm everything is working.
  2. Download the Dify codebase: The Dify team provides the Docker setup as part of their code repository on GitHub. You’ll need to download (clone) this repository. If you have Git installed, you can use the following command in a terminal: git clone https://github.com/langgenius/dify.git This will create a folder called “dify” with all the necessary files. Tip: If you don’t have Git, you can alternatively download the ZIP file from the Dify GitHub page and extract it, but using git clone is recommended to easily get updates later. The repository isn’t huge, but it will pull in multiple files including a docker/ directory that contains the deployment configuration.
  3. Configure the environment variables: Navigate into the folder containing Dify’s Docker setup: cd dify/docker Inside, you’ll find a file named .env.example. This file contains default configuration values (such as database passwords, ports, etc.) for the Docker deployment. You need to create your own .env file based on this template. The simplest way is to copy it: cp .env.example .env This creates a new file .env with the same contents. You can open .env in a text editor to review or modify settings. For a basic install, the defaults should be fine. A few key things in this file:
    • Ports: By default, the web interface will run on ports 80 and 443 (HTTP/HTTPS). If you already have something on port 80 (like a web server), you might want to change the 80:80 mapping to a different port (for example 8080:80), either in docker-compose.yml or via the corresponding nginx port variable in the .env file. Otherwise, ensure nothing else is using port 80 on your machine while starting Dify.
    • Passwords/Keys: The env file sets default passwords for the Postgres database, etc. These are randomly generated defaults. You can leave them as is for testing, but for a production deployment consider changing them to your own secure passwords before you launch.
    • Volumes: The Docker Compose setup will create some local folders (volumes) to store data (like the database files and vector index). By default, these will be under dify/docker/volumes. Make sure you have enough disk space and that it’s okay to store data there. Keeping these volumes is important so your data persists across restarts.
    Double-check if you need to adjust any domain or URL settings. By default, the configuration assumes you’ll access Dify via localhost or the server’s IP. If you plan to use a custom domain from the start, you might need to set CONSOLE_WEB_URL, APP_WEB_URL, etc., in the env file. For a first-time setup, you can skip that and use the IP/localhost, then configure domain later.
  4. Start Dify using Docker Compose: Now for the exciting part – launching the Dify application stack. Docker Compose will read the docker-compose.yml file and spin up multiple containers for Dify. Make sure you’re still in the dify/docker directory, then run: docker compose up -d (Note: If your Docker uses the older Compose v1, the command might be docker-compose up -d with a hyphen. But on modern setups, docker compose works.) The first time you run this, Docker will download several container images from the internet. These include:
    • Dify’s backend (API) image and frontend (web) image,
    • Dify’s worker image (for background tasks),
    • PostgreSQL (the database used to store app data),
    • Weaviate (a vector database for knowledge base and embeddings, enabling the RAG functionality),
    • Redis (an in-memory cache and message broker, used for fast operations and as a task queue for Celery),
    • Nginx (a web server/reverse proxy that fronts the Dify web interface and API on port 80/443),
    • and a couple of utility containers: an SSRF proxy (for security when Dify fetches external URLs) and the Sandbox (which runs user-supplied code, such as workflow Code nodes, in an isolated environment).
    This initial download may take a few minutes depending on your internet speed. You’ll see logs in the terminal as each image is pulled and each container is started. After it finishes, Docker Compose will detach (thanks to the -d flag) and leave everything running in the background. You can check that all containers are up by running: docker compose ps This will list the containers and their status. You should see something like Up (healthy) or Up for each service (api, web, worker, db, etc.). If any container shows an error (Exited or restarting), you may need to troubleshoot by checking its logs (docker compose logs <serviceName>). For a successful start, you’ll typically have 9 or 10 containers running.
  5. Perform initial setup (create admin account): With Dify’s services now running, you need to do a one-time initialization step in your browser. This involves setting up the administrator account for Dify. Open your web browser and go to the special install page:
    • If you’re running locally on the same machine, navigate to http://localhost/install.
    • If you’re running this on a remote server, use the server’s IP address (or domain if configured) followed by /install – for example: http://123.45.67.89/install (using your server’s public IP).
    You should see an “Initialize Administrator” page or a setup wizard. It will likely ask you to create the admin user by providing an email and password. Choose an email (it could be fake or just “admin@yourdomain.com”) and a strong password that you’ll remember. This account will be the superuser that can manage the Dify instance (add other users, change settings, etc.). Submit the form to create the admin. Once completed, the setup page should redirect you to the main login or homepage of Dify. You can now log in with the admin credentials you just set (if it didn’t log you in automatically). 🎉 Congratulations – your Dify instance is installed and initialized!
  6. Access the Dify web interface: After initialization, the Dify web application is accessible at the base URL:
    • Locally: http://localhost/ (If you set up on your own PC or are accessing via something like http://127.0.0.1.)
    • On a server: http://your_server_ip/ (or http://yourdomain/ if you pointed a domain and configured DNS).
    You should see Dify’s interface in your browser. This typically includes a dashboard or homepage showing options like creating a new AI app or workflow. From here, you can start exploring Dify’s features: create a project, configure model providers (e.g., plug in your OpenAI API key under settings), add a knowledge base or dataset for RAG if you plan to use one, and then build an AI app via the visual workflow editor. Take a moment to familiarize yourself with the UI. For instance, there might be a menu for “Applications” where you can design chatbots or agents, a “Knowledge” section for uploading documents to use with RAG, and a “Settings” area for managing your Dify instance (organization settings, user management, provider API keys, etc.). Because you are the admin, you’ll have access to all settings. Regular users (if you invite any) might have more limited access depending on roles.
  7. Verify everything is working: It’s a good idea to do a quick functionality test. For example, you can try to create a simple AI app:
    • Use the interface to create a new application (perhaps a chat assistant).
    • If prompted, select a model provider and enter an API key (like OpenAI). Dify supports adding providers in the settings; you might need to do that first – e.g., add OpenAI with your API key so it can use GPT-3.5 or GPT-4.
    • Use one of the template workflows or create a blank one and add a prompt node (the UI is intuitive: drag nodes, set up a conversation prompt, etc.).
    • Try interacting with your app (there’s usually a test chat interface once you publish the app or in preview mode).
    If the AI responds correctly (assuming you set up a valid model key and internet access), then Dify is functioning as expected. If you run into any errors (like the AI not responding), double-check that your model API credentials are set and that the Dify worker container is running (the worker handles tasks like calling the model API).

That’s it! You have successfully installed Dify using Docker. You can keep this Dify instance running as long as you need. The Docker containers will continue to run in the background. If you need to stop Dify, you can go into the dify/docker directory and run docker compose down (this will stop and remove the containers, but your data in the volumes will persist). To start it again later, use docker compose up -d once more.
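
For reference, here is the entire Docker Compose flow from the steps above collected in one place (assuming the default ports and the standard dify/docker layout):

  # 1. Get the code and enter the Docker setup directory
  git clone https://github.com/langgenius/dify.git
  cd dify/docker

  # 2. Create your environment file from the template (edit it if you need custom ports or passwords)
  cp .env.example .env

  # 3. Start the full stack in the background
  docker compose up -d

  # 4. Check that all services came up; tail the logs if something looks wrong
  docker compose ps
  docker compose logs -f        # Ctrl+C to stop following

  # 5. Stop the stack later (data under ./volumes persists across restarts)
  docker compose down

After step 3, finish the one-time browser setup at http://localhost/install (or your server’s IP) as described above.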

Additional Docker Deployment Tips:

  • If you want Dify to start on boot (for a server environment), rely on Docker’s restart policies. Dify’s compose file generally sets restart: always on its services; verify this (or add restart: unless-stopped to the services in docker-compose.yml), and make sure the Docker daemon itself starts on boot (on Linux: sudo systemctl enable docker). Note that docker compose up does not take a --restart flag – restart policies are defined per service in the compose file. In a pinch, you could also put the docker compose up -d command in a system startup script.
  • Logs and Troubleshooting: If something isn’t working right (for example, the web UI doesn’t load, or some function fails), check the logs. Use docker compose logs -f to tail all logs, or specify a service like docker compose logs api or docker compose logs web. This can show error messages that point to missing configuration or other issues. Common fixes include adjusting environment variables (like allowed host URLs if you changed the domain) or ensuring all containers are healthy.
  • Updating Dify: The Dify team frequently updates the platform with new features and fixes. To update your self-hosted instance to the latest version, pull the latest code and images: from the dify directory run git pull (this fetches the latest code, including docker-compose.yml changes), then from dify/docker run docker compose down, docker compose pull (to download the latest images for each service), and finally docker compose up -d to restart on the new version. The same sequence appears as a ready-to-copy listing after this list. Keep an eye on the official Dify release notes for any breaking changes or new environment variables you might need to add to your .env file when upgrading. (If the .env.example file in the repo changed, compare it to your .env and add any new required settings.)
  • Backups: If you are using Dify in production or for something important, set up a backup strategy. At minimum, back up the database and any file uploads. In this Docker setup, key data is stored in the Docker volumes under dify/docker/volumes. You could periodically make a copy of that directory (while the containers are stopped) to have a snapshot of your data. This includes the Postgres database data, Weaviate indexes, etc. There may also be an option within Dify to export certain data (for example, exporting an app configuration), but external backups are always a good idea.
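
The update and backup routines above, condensed into a copy-paste sketch (it assumes the default dify/docker layout and that a short downtime window is acceptable):

  # Update to the latest release (run from the dify repository root)
  git pull                      # fetch the latest code, including compose file changes
  cd docker
  docker compose down
  docker compose pull           # download the newest images for each service
  docker compose up -d          # restart on the new version
  # Afterwards, diff .env.example against your .env for any new settings

  # Simple cold backup of the data volumes (run from dify/docker while the stack is down)
  docker compose down
  tar -czf ../../dify-volumes-$(date +%F).tar.gz volumes/
  docker compose up -d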

Now that we have the Docker method covered, let’s briefly outline how to run Dify from source code. If you’re not interested in this advanced method, you can skip to the next section. But it’s useful to know how Dify runs under the hood and how you might set it up without containers.

Installing Dify from Source (Advanced Method)

Method 2: Local Source Code Setup
This method is for users who want to run Dify without Docker, directly on the host machine. It involves more steps – installing dependencies like Python and Node.js – and is generally recommended only if you plan to develop or contribute to Dify’s code, or need a setup that Docker can’t easily provide. Beginners can read through this to understand how Dify’s architecture works, but don’t worry if it feels complex.

Overview of Dify’s components: When running from source, you will be running three main parts of Dify:

  • The backend (API server) – a Python Flask application that provides API endpoints and handles core logic.
  • The worker – a background task runner (Celery, with Redis as the message broker) that performs asynchronous tasks such as processing knowledge base documents or sending emails.
  • The frontend (web client) – a Next.js (Node.js) application that provides the user interface in the browser.

Additionally, you need the middleware services which include Postgres, Redis, vector database, etc. In the source setup, it’s common to still use Docker for these supportive services (so you don’t have to install Postgres or Redis manually). The Dify team provides a docker-compose.middleware.yaml to quickly bring up these dependencies.

In summary, the source setup often is “hybrid”: use Docker to run Postgres, Redis, Weaviate, etc., but run the Dify app code (API, worker, web) on the host for development.

Here are the steps to install from source:

  1. Prepare Docker for the middleware services: Even though we are not running the main Dify app in Docker, we will use Docker to run the required databases and services. Ensure Docker is installed and running (as in step 1 of the previous section). If you followed Method 1 and already have the Dify repository, you’re good to go. If not, clone the Dify GitHub repository to get the code on your machine: git clone https://github.com/langgenius/dify.git Navigate into the repository folder: cd dify
  2. Start the middleware services using Docker Compose: Inside the dify directory, there is a docker/ folder which contains docker-compose.middleware.yaml and a middleware.env.example. These define containers for Postgres, Redis, Weaviate, etc. We will launch those: cd docker cp middleware.env.example middleware.env docker compose -f docker-compose.middleware.yaml up -d This will spin up the database, vector store, and other background services Dify needs. It’s similar to the previous method, except we are not starting the Dify application containers. After running this, check docker compose ps (with the -f docker-compose.middleware.yaml flag or being in the same directory) to see that db, redis, weaviate, etc., are running. They will wait idle for connections. Note: The middleware.env file can be used to adjust things like passwords and storage locations. By default, Postgres will be on port 5432, Redis on 6379, Weaviate on 8080 (internal), etc. The defaults usually work, but ensure none of those ports conflict with existing services on your host.
  3. Set up the Python backend (API service): Dify’s backend is written in Python (likely requiring Python 3.10+; at the time of writing, Python 3.12 is recommended by the docs). Make sure you have a compatible Python version installed. You might use a tool like pyenv to install and manage the correct Python version (for example, pyenv install 3.12 && pyenv global 3.12). Otherwise, download and install Python 3.12 from python.org if needed. Once the correct Python is ready, we’ll set up a virtual environment for the Dify API:
    • Navigate to the api directory of the Dify project: cd ../api # assuming you were in dify/docker, this goes to dify/api (Or open a new terminal and go to dify/api).
    • Create a copy of the example environment file for the API: cp .env.example .env Open the new .env file in an editor. We need to adjust a couple of things:
      • The SECRET_KEY: This is a secret token used by Flask for security (signing sessions, etc.), so set it to a long random value. On Linux/macOS you can generate one with: openssl rand -base64 42 (or, if you prefer Python: python -c "import secrets; print(secrets.token_urlsafe(42))"). Copy the output and paste it as the value of SECRET_KEY= in the .env file (replacing the placeholder), then save the file. Avoid pasting secrets into online generators.
      • The database and Redis connection settings in this file should match what the Docker middleware is running. By default, the .env.example should already point to localhost for Postgres and Redis with the default credentials (which are set in the middleware.env we used). For instance, DB_HOST=localhost, DB_PASSWORD=postgresPassword (check that matches middleware.env’s POSTGRES_PASSWORD), REDIS_HOST=localhost, etc. If you changed passwords in middleware.env, update them here too.
      • Make sure OPENAI_API_KEY or others are blank for now (you can set them later via the UI; keeping them in .env is optional).
    • Install Python dependencies: Dify’s backend manages its dependencies with uv, a fast Python package manager from Astral (not to be confused with uvicorn). If you don’t have it, install it first – for example pip install uv – and then, in the api directory, run: uv sync This installs the backend’s dependencies from the project’s lockfile into a local virtual environment. (If your checkout still includes a requirements.txt, pip install -r requirements.txt inside a virtual environment is an alternative; if your version of the project uses Poetry, use that accordingly.)
      This will install Flask, Celery, SQLAlchemy, and all other dependencies required for Dify’s backend. It might take a couple of minutes. Make sure it finishes without errors.
    • Apply database migrations: Dify’s backend uses a SQL database (PostgreSQL) with a schema that needs to be created, and Flask migration scripts are included. Run the migration command: flask db upgrade (If the flask command isn’t found, run it through uv with uv run flask db upgrade, or use python -m flask db upgrade from within the environment.)
      This will create the necessary tables in the Postgres database (which is running in the Docker middleware). After this, the database is ready to use.
    • Start the API server: Now, launch the Flask development server for the API: flask run --host 0.0.0.0 --port 5001 --debug This will start the API on port 5001. We use 0.0.0.0 so it’s accessible to the outside (and to the other components like the web frontend). The debug flag is optional (it enables auto-reloading on code changes and debug output). You should see output indicating the server is running at http://0.0.0.0:5001 (which means local port 5001). Keep this terminal running – it’s your backend service.
  4. Start the Worker service: Open a new terminal window or tab (because the first one is occupied by the running API server). We need to start the Celery worker that processes background jobs. Navigate to the dify/api directory in this new terminal (activate the same Python environment if using virtualenv). Use the Celery command provided by the Dify project to start the worker: celery -A app.celery worker -P gevent -c 1 --loglevel=INFO -Q dataset,generation,mail,ops_trace This command tells Celery to start one worker (-c 1) using gevent (for concurrency) and listen on specific queues (dataset, generation, etc.). If you are on Windows, the gevent pool might not work – in that case, the docs suggest using solo mode: celery -A app.celery worker -P solo --without-gossip --without-mingle -Q dataset,generation,mail,ops_trace --loglevel=INFO Either way, once executed, you should see the Celery banner in the console and logs showing it connected to Redis and is waiting for tasks. The worker will handle tasks like indexing documents for the knowledge base, sending emails (for invites or notifications), and other asynchronous operations from Dify. Keep this terminal open as well. Now you have two terminals: one running the Flask API, and one running Celery worker.
  5. Set up and start the Frontend (Web) service: The Dify web interface is a Next.js application (Node.js + React). To run it from source, you need Node.js (and npm/pnpm) installed on your machine.
    • Install Node.js (if not already). The recommended version is Node 18 LTS or above. Node 20+ should also work. Go to nodejs.org and download the LTS version for your OS, or use a version manager like nvm. Once installed, verify by running node -v.
    • Install pnpm (a package manager alternative to npm, which the project uses). You can install pnpm globally by running: npm install -g pnpm Ensure pnpm is installed by checking pnpm -v. It should output a version number.
    • Navigate to the dify/web directory in a new terminal (or reuse one if you stopped the API for a moment, but better to use a separate one so everything can run simultaneously). For example: cd ../web # from the project root or adjust path accordingly
    • Install frontend dependencies: pnpm install --frozen-lockfile This will download all the JavaScript/React dependencies needed for the Dify front-end. Using --frozen-lockfile ensures it uses the exact versions specified by the project (from pnpm-lock.yaml). It may take a minute or two.
    • Configure the frontend environment: Unlike the backend which used a .env file, the frontend expects a .env.local file for custom configuration. Create one by copying the example: cp .env.example .env.local Open .env.local in an editor. We need to set it up to know how to reach the backend API:
      • Ensure NEXT_PUBLIC_DEPLOY_ENV=DEVELOPMENT (since we are running it locally for development; if you intended a production build, you’d set PRODUCTION and later build differently, but for our guide we’ll assume development mode).
      • Ensure NEXT_PUBLIC_EDITION=SELF_HOSTED (this tells the UI that this is a self-hosted instance, not the official cloud).
      • API URLs: By default, the example might have something like NEXT_PUBLIC_API_PREFIX=http://localhost:5001/console/api and NEXT_PUBLIC_PUBLIC_API_PREFIX=http://localhost:5001/api. These URLs are how the web frontend will call the backend. We need them to match where our Flask API is serving. We started the Flask API on port 5001, and the Dify API has two main endpoint prefixes: one for the console (admin & internal UI calls) and one for the app (end-user facing API). The values above with localhost:5001 are correct if everything is running on the same machine. If your API is on a different host or you changed ports, update accordingly.
        • If you plan to access the frontend from a different computer (say you run the web on a server and open the UI in your local browser), make sure to use the server’s IP or domain instead of localhost in these URLs. For example, http://<server-ip>:5001/console/api.
      • The rest (Sentry config, etc.) can be left blank or default.
        Save the .env.local file after making these edits.
    • Run the development build for the web app: pnpm dev This will start the Next.js development server. (If there’s a separate build step needed like pnpm build followed by pnpm start for production, that’s only if you want an optimized build. For development, pnpm dev is fine as it auto-reloads on changes.)
      You should see output indicating the web server is running, typically on http://localhost:3000 by default. It will say “Ready on http://localhost:3000” or similar.
  6. Access the Dify interface (source-run): Now, open a browser on your machine and go to http://localhost:3000. This is the Next.js app we just launched. It should show the Dify interface. Since this is the first time, it may automatically redirect you to an installation page (similar to the Docker method) where you create the admin account. If so, follow the same steps: create your admin login. (If it doesn’t prompt, perhaps the database already has admin from a previous run; but assuming a fresh DB, you’ll do the setup). After the admin is set, you can log in and use Dify normally. The experience using it should be the same as the Docker deployment, except that you are running each component manually. As you use the interface, keep an eye on the terminal running the API server and the one running the worker – you’ll see logs appearing there for various actions (e.g., API calls, background tasks). This can be insightful for debugging or just understanding how Dify works internally.
  7. Managing the source-based setup: With everything running in separate terminals, it’s a bit more involved to manage, but offers flexibility:
    • If you make changes to the code (if you’re a developer tweaking Dify), the Next.js front-end will auto-reload, and the Flask back-end in debug mode will auto-restart when it detects changes. This is great for development. For production use, you would not run these dev servers; instead, you’d build the frontend (pnpm build and serve the static build) and run the backend with a production WSGI server. That, however, is beyond our scope here.
    • To stop the services, you can simply Ctrl+C in each terminal. The order shouldn’t matter too much, but you’d typically stop the web server, then the API, then the worker. The Docker containers for Postgres/Redis will still be running until you stop them with docker compose down. If you want to shut down completely, run docker compose -f docker-compose.middleware.yaml down in the dify/docker directory to stop the databases.
    • Starting everything up again requires re-running the commands for API (flask run...), worker (celery ...), and frontend (pnpm dev), plus ensuring the Docker services are up. This is why for a permanent deployment, Docker Compose (Method 1) is simpler – it bundles all these into one step. But for development or debugging, running from source is invaluable.
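
To tie the source-based steps together, here is a condensed recap of the commands above – a sketch that assumes Linux/macOS, the default middleware settings, and separate terminals for the API, worker, and web processes (uv run executes a command inside the environment uv created; if you activated a virtualenv manually, plain flask/celery also works):

  # Terminal 0: middleware services (Postgres, Redis, Weaviate, ...)
  cd dify/docker
  cp middleware.env.example middleware.env
  docker compose -f docker-compose.middleware.yaml up -d

  # Terminal 1: backend API
  cd dify/api
  cp .env.example .env                       # then set SECRET_KEY, e.g. openssl rand -base64 42
  pip install uv && uv sync                  # install backend dependencies
  uv run flask db upgrade                    # create the database schema
  uv run flask run --host 0.0.0.0 --port 5001 --debug

  # Terminal 2: background worker
  cd dify/api
  uv run celery -A app.celery worker -P gevent -c 1 --loglevel=INFO -Q dataset,generation,mail,ops_trace

  # Terminal 3: web frontend
  cd dify/web
  npm install -g pnpm                        # once, if pnpm is not installed yet
  pnpm install --frozen-lockfile
  cp .env.example .env.local                 # point the API URLs at http://localhost:5001 as described above
  pnpm dev                                   # UI becomes available at http://localhost:3000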

In summary, the source method shows that Dify’s architecture includes multiple moving parts: a database (Postgres), a vector DB (Weaviate), a cache broker (Redis), a Python API, a Python worker, and a Node.js frontend. Docker Compose automated all that for us, whereas running from source we do it manually. Unless you have a specific need to modify code or cannot use Docker, most users will stick with the Docker deployment for convenience.

Now that Dify is up and running (by either method), let’s cover some post-installation tips and next steps to make the most of your self-hosted Dify.

Post-Installation Configuration and Tips

  • Setting up Model Providers: After installation, you should configure at least one AI model provider so your apps can actually generate text or responses. In the Dify web console, go to the settings or providers section. Common setup:
    • For OpenAI: obtain an API key from your OpenAI account, then in Dify, add a provider “OpenAI” and paste the API key. You can then choose which model (GPT-3.5, GPT-4, etc.) to use for your applications.
    • For Azure OpenAI: you’d need your endpoint and key, similarly configurable.
    • For open-source/local models: Dify can connect to locally hosted models through providers such as Ollama or Xinference, or through any OpenAI-compatible API endpoint. This requires additional setup, namely running a local model server and adding it as a provider. Check Dify’s documentation for integrating local models if you’re interested. The built-in Weaviate vector DB handles embedding storage and retrieval for your knowledge bases, but the actual language generation still comes from whichever LLM endpoint or service you configure.
  • Email and Notifications: If you plan to invite team members to your Dify instance or use any feature that sends emails (like user invites or password resets), you should configure an email SMTP server. By default, Dify might not send emails unless configured. Look in the .env (for Docker) or settings for SMTP host, port, user, pass. You can use services like Gmail SMTP (for small scale) or a transactional email service. Setting this up ensures invite links or other notifications reach the recipients. If you skip this, features like “invite user by email” will produce an invitation link for you to manually share (as noted in Dify’s docs).
  • Using a Custom Domain: If you installed on a remote server and want to use a nice URL (say, dify.yourcompany.com), you can do so by pointing your domain’s DNS to your server’s IP. Then, you should adjust the environment variables in Dify for the web URLs:
    • In Docker, variables like CONSOLE_WEB_URL, CONSOLE_API_URL, APP_WEB_URL, APP_API_URL define the expected URLs. You would set those to your domain (e.g., https://dify.yourcompany.com with the appropriate paths). You will also want to handle HTTPS. One approach is to put a reverse proxy like Nginx or Caddy on the host in front of Dify and use Let’s Encrypt for SSL; the Docker setup’s Nginx container can also be configured for SSL by mounting certificates. Alternatively, many people use a service like Cloudflare Tunnel or Nginx Proxy Manager to simplify exposing the service with SSL. A minimal example of the host-level reverse proxy approach follows this list.
  • Resource Monitoring: As you use Dify, monitor your server’s resources. Running multiple containers (especially memory-heavy ones like a vector DB and Node.js) can use a few GB of RAM. If you only had exactly 4 GB on the server, you might be running close to the limit when everything is active. Swap space (on Linux) or increasing RAM might be needed for heavy usage.
  • Security: Since your Dify instance might be accessible on the web, secure it:
    • Change default passwords (for database, etc.) if you haven’t.
    • Make sure the admin account password is strong and that you don’t expose the instance to the public before setting up an admin.
    • Consider enabling HTTPS (as mentioned above) to encrypt traffic, especially if you’ll use it over the internet.
    • Keep Dify updated – new releases may fix security issues or bugs. Updating is as simple as pulling new images or code as described.
    • If you open your Dify to other users, use the member management features to control who can access what. Perhaps limit admin privileges to yourself and give others basic roles.
  • Exploring Dify Features: With the platform running, take advantage of all its features:
    • Try creating a visual workflow: Dify allows you to orchestrate multiple steps, not just a single prompt. You can chain model calls, apply logic, and incorporate tools. For example, you could make an AI that first searches a knowledge base, then answers a question. The interface will let you drag nodes representing these actions.
    • Utilize the Knowledge Base (Datasets): Upload some documents or FAQs via the Knowledge section. Dify will index them (the worker does this) so that your AI apps can use this information. This is the RAG capability – retrieving relevant info from your data and feeding it into the prompt so the AI can give informed answers.
    • Check out Plugins or Extensions: Recent Dify releases include a plugin system with a marketplace. Plugins let you extend functionality – for instance, connecting to third-party tools and APIs or adding extra model providers. As an admin of a self-hosted Dify, you can install and manage these.
    • Monitor the Logs and Usage: Dify likely provides some usage stats – e.g., number of requests, token counts, etc., and logs of AI responses. This is useful to refine your prompts and see how the AI is performing. You can annotate or flag responses to improve your application iteratively.
  • Learning Resources: If you’re new to building with AI and Dify, plenty of resources can help you get the most out of it. Dify’s official documentation is a great place to start for specific how-tos on creating applications. Additionally, there are courses on AI prompt engineering and workflow design on platforms like Udemy that can deepen your understanding. For example, a course on “Building AI Chatbots without Coding” or “Prompt Engineering Masterclass” might pair well with practicing in Dify. While we can’t link directly here, a quick search on Udemy or YouTube for Dify tutorials might yield some walkthrough videos. Taking a structured course or following a demo project can accelerate your mastery of the platform and AI development in general.
  • Community and Support: Since Dify is open-source, there’s a community of users and developers. If you encounter issues or have questions, you can look for:
    • Dify’s GitHub repository (check the “Issues” section for known bugs or to ask questions).
    • Community forums or Discord/Slack channels if available (some open-source projects have these for support).
    • The official Dify documentation FAQ section – it covers common problems (like CORS issues if you change the domain, or troubleshooting the installation).
    • Remember that self-hosting means you’re the admin – so you’ll be the one debugging if something goes wrong. But the collective knowledge of the community is there to help when you get stuck.
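
For the custom-domain setup mentioned above, one low-effort option is to put Caddy in front of Dify, since it obtains and renews Let’s Encrypt certificates automatically. The sketch below makes a few assumptions: Dify’s nginx has been remapped to listen on 127.0.0.1:8080 instead of port 80, Caddy is installed as a system service on a Linux host, and DNS for dify.yourcompany.com already points at the server.

  # Add a site block to /etc/caddy/Caddyfile:
  #
  #   dify.yourcompany.com {
  #       reverse_proxy 127.0.0.1:8080   # forward HTTPS traffic to Dify's nginx on the local port
  #   }
  #
  # Then reload Caddy; it will obtain a Let's Encrypt certificate for the domain automatically:
  sudo systemctl reload caddy

After that, update the URL-related variables (CONSOLE_WEB_URL, APP_WEB_URL, etc.) in dify/docker/.env to https://dify.yourcompany.com and restart the stack. Nginx with certbot, or Nginx Proxy Manager, achieves the same result – the key point is that only the proxy is exposed publicly while Dify itself listens on a local port.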

Conclusion

By following this guide, you now have your very own instance of Dify up and running! You’ve learned about Dify’s purpose and features, ensured your environment was ready, and stepped through installation using Docker (with an insight into manual setup as well). With Dify self-hosted, you are in control – you can build private AI solutions tailored to your needs, whether it’s an intelligent customer support chatbot, a creative writing assistant, or an internal tool to boost your team’s productivity. The platform’s visual workflow builder and support for multiple AI models means you have a powerful toolkit at your fingertips without needing to code everything from scratch.

As a next step, we encourage you to experiment: create an application in Dify, test out prompt ideas, integrate your own data, and maybe even invite a colleague to try it out (since you can host it for multiple users). The more you play with it, the more you’ll discover what’s possible. And don’t forget to keep an eye on Dify’s updates – new features like plugin support or improvements in the AI engine integration are being rolled out as the community grows.

Happy building with Dify, and welcome to the world of self-hosted AI applications! With this setup, you have the freedom to innovate at your own pace, with no cloud limitations holding you back. Enjoy your journey into generative AI development, and have fun creating amazing AI-powered apps using Dify.
