
Reliable and easy-to-set-up way to deploy Crawl4ai #180

Open
sean-cofinance opened this issue Oct 18, 2024 · 2 comments

Comments

@sean-cofinance

Hey everyone,

The final step of development—deployment—is the most challenging. I'm sure many of you will agree with me.

Could someone share their experience on the best way to deploy Crawl4AI? Some options to consider are:

  • Using a big cloud provider like AWS, Azure, or GCP
  • Considering modern cloud platforms like Railway, Fly, or Vultr
  • Choosing between a single container and a swarm (for the single-container route, see the rough Dockerfile sketch after this list)
  • Using Kubernetes
  • Configuring for a server or cluster
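
For the single-container route, this is roughly what I had in mind (an untested sketch on my part; "server.py" stands in for whatever small FastAPI wrapper around AsyncWebCrawler you actually run):

# Untested single-container sketch. "server.py" is a hypothetical FastAPI app
# that exposes AsyncWebCrawler over HTTP.
FROM python:3.10-slim

# Crawl4AI drives Chromium through Playwright, so the browser and its system
# dependencies have to be baked into the image.
RUN pip install --no-cache-dir crawl4ai "fastapi[standard]" uvicorn \
 && playwright install --with-deps chromium

WORKDIR /app
COPY server.py .

CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000"]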

Thank you in advance for your answers and thoughts!

@chanmathew

@sean-cofinance I just recently set this up on Modal.com, which was a pretty smooth exercise. Here's my code if it helps:

import modal

# Build a custom container image with the necessary dependencies; it is passed to the function below.
crawler = (
    modal.Image.debian_slim(python_version="3.10")
    .pip_install_from_requirements("requirements.txt")
    .run_commands(
        "apt-get update",
        "apt-get install -y software-properties-common",
        "apt-add-repository non-free",
        "apt-add-repository contrib",
        "playwright install-deps chromium",
        "playwright install chromium",
        "playwright install",
    )
)

from crawl4ai import AsyncWebCrawler
from pydantic import BaseModel, Field
from fastapi import Header, HTTPException
from jwt import decode, PyJWTError  # only needed once you add your own auth check

app = modal.App("crawler")

class CrawlRequest(BaseModel):
    url: str
    bypass_cache: bool = Field(default=False)
    # add any other AsyncWebCrawler.arun() kwargs you want to expose here

# Define the function that will be executed in the container
@app.function(image=crawler)
@modal.web_endpoint(method="POST", docs=True)
async def crawl(request: CrawlRequest, authorization: str = Header(...)):
    # You will want to have your own authorization strategy here to protect your endpoint
    print(f"Crawling URL: {request}")
    # Create an instance of AsyncWebCrawler
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Run the crawler on the given URL
        crawl_kwargs = request.dict(exclude_unset=True)
        try:
            result = await crawler.arun(**crawl_kwargs)
            print(result)
            return result
        except Exception as e:
            error_message = f"Error during crawling: {str(e)}"
            print(error_message)
            return {"error": error_message}

# Entrypoint that will be used to trigger the crawler when testing locally
@app.local_entrypoint()
async def main(url: str):
    result = crawl.remote(CrawlRequest(url=url))
    print(result)
    return result
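
The jwt and HTTPException imports above only matter once you wire in that authorization check. As a rough sketch (assuming HS256 tokens signed with a shared secret exposed to the container as a JWT_SECRET environment variable, e.g. via a Modal secret; both of those are assumptions, not something Crawl4AI or Modal requires):

import os
import jwt
from fastapi import HTTPException

def verify_token(authorization: str) -> dict:
    # Hypothetical helper: expects an "Authorization: Bearer <token>" header and a
    # JWT_SECRET environment variable; adapt this to however you actually issue tokens.
    try:
        token = authorization.removeprefix("Bearer ").strip()
        return jwt.decode(token, os.environ["JWT_SECRET"], algorithms=["HS256"])
    except (KeyError, jwt.PyJWTError):
        raise HTTPException(status_code=401, detail="Invalid or missing token")

Calling verify_token(authorization) as the first line of crawl() would then reject unauthenticated requests before the crawler starts.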

My requirements.txt:

crawl4ai
asyncio
playwright
fastapi[standard]
pydantic
PyJWT
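
Assuming you save the script as crawler.py (the filename is up to you), you can test it and then deploy the endpoint with Modal's CLI:

# Runs the local_entrypoint once against a URL, executing the function in Modal
modal run crawler.py --url https://example.com

# Deploys the web endpoint so it gets a persistent URL
modal deploy crawler.py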

@sean-cofinance
Author

@chanmathew, I can't wait to try this out today! Thank you so much. This is really intriguing, and I'm super excited about it!
