
Anthropic Claude API: A Practical Guide

October 7, 2024 by acorn-labs

What Is Anthropic Claude?

Claude is a large language model (LLM) developed by Anthropic, trained on a vast corpus of data to understand and generate human-like text. Reportedly named after Claude Shannon, the father of information theory, the model aims to set new standards in AI language processing by applying ethical AI principles and improving the safety of automated interactions.

Claude is considered a state-of-the-art LLM, competitive with industry leaders like OpenAI’s GPT-4. It supports applications ranging from text summarization and content creation to code generation and conversational agents. With extensive safety features, Claude minimizes the risks associated with AI misuse and focuses on responsible deployment, helping ensure that AI advancements align with human values.

This is part of a series of articles about Anthropic Claude.

In this article:

  • What Is the Claude API?
  • Claude API Pricing
  • Claude API Rate Limits
  • Getting Started with Anthropic Claude API
  • Claude API Examples

What Is the Claude API? {#what-is-the-claude-api}

The Claude API, developed by Anthropic, allows developers to integrate Claude’s advanced language models into their applications for a wide range of tasks, such as text generation, summarization, and conversational agents. The API supports various models, providing flexibility depending on the complexity and speed required for the task.

It is available with scalable access options, including pay-as-you-go pricing and higher-volume custom plans for enterprise use. The API is designed to prioritize safety and ethical use, ensuring secure and trustworthy deployments.

Features of the Claude API:

  • Text and code generation: Create detailed responses, generate code, debug, and more.
  • 200K token context window: Handle large datasets and long conversations effectively.
  • Tool integration: Use Claude to interact with external tools for more dynamic capabilities.
  • High security: SOC 2 Type II compliance and HIPAA options for handling sensitive data.
  • SDK support: Available for Python and TypeScript for easy integration.
  • Low hallucination rates: Delivers more accurate and reliable responses in complex tasks.

Claude API Pricing {#claude-api-pricing}

The pricing for the Claude API by Anthropic is based on a pay-as-you-go model, with rates that vary depending on the complexity of the model used:

Claude 3.5 Sonnet:

  • $3 per million input tokens
  • $15 per million output tokens
  • $3.75 per million tokens for prompt caching write
  • $0.30 per million tokens for prompt caching read
  • Supports up to a 200,000 token context window, suitable for complex tasks like multi-step workflows and customer support.

Claude 3 Opus:

  • $15 per million input tokens
  • $75 per million output tokens
  • $18.75 per million tokens for prompt caching write
  • $1.50 per million tokens for prompt caching read
  • 200,000 token context window, best suited for highly complex tasks, such as in-depth research, advanced coding, and strategic analysis.

Claude 3 Haiku:

  • $0.25 per million input tokens (fastest, most cost-effective model)
  • $1.25 per million output tokens
  • $0.30 per million tokens for prompt caching write
  • $0.03 per million tokens for prompt caching read
  • Also supports a 200K token context window; optimized for lightweight, fast tasks.

**Claude 3 Sonnet** (legacy):

  • $3 per million input tokens
  • $15 per million output tokens
  • 200,000 token context window.

Claude Instant 1.2:

  • $1.63 per million input tokens
  • $5.51 per million output tokens
  • This faster, more cost-efficient model is designed for tasks requiring quick responses, such as casual dialogue and document summarization.
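As an illustration of how these rates translate into dollar amounts, here is a small sketch that estimates the cost of a single request from its token counts. The `RATES` table and `estimate_cost` helper are my own for illustration; the short model keys are informal labels, not API model IDs.

```python
# Illustrative per-million-token rates from the list above, in USD
# (the short keys here are informal labels, not API model IDs)
RATES = {
    "claude-3-5-sonnet": (3.00, 15.00),   # (input rate, output rate)
    "claude-3-opus": (15.00, 75.00),
    "claude-3-haiku": (0.25, 1.25),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token usage."""
    input_rate, output_rate = RATES[model]
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: 50,000 input tokens and 2,000 output tokens on Claude 3.5 Sonnet
print(estimate_cost("claude-3-5-sonnet", 50_000, 2_000))  # 0.18
```

Note that output tokens are several times more expensive than input tokens on every model, so workloads that generate long responses cost disproportionately more.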

Claude API Rate Limits {#claude-api-rate-limits}

Anthropic enforces limits on the Claude API to ensure fair usage and prevent misuse. These fall into two categories: usage limits and rate limits.

Usage Limits

Usage limits are set based on different tiers that determine the maximum cost an organization can incur monthly. As you reach certain thresholds in usage, your organization can automatically advance to higher tiers, which allow for more extensive use. Each tier has specific requirements:

  • Free tier: Allows up to $10 of API usage per month.
  • Build Tier 1: Requires a $5 deposit and allows up to $100 of usage per month.
  • Build Tier 2: Requires a $40 deposit with a 7-day wait period after the first purchase, allowing up to $500 of usage per month.
  • Build Tier 3: Requires a $200 deposit with a 7-day wait period, allowing up to $1,000 of usage per month.
  • Build Tier 4: Requires a $400 deposit with a 14-day wait period, allowing up to $5,000 of usage per month.
  • Scale tier: This is a custom tier with no predefined limits, available by contacting sales.

Once you reach the monthly usage limit of your tier, you must either wait for the next month or qualify for the next tier to continue using the API.

Rate Limits

Rate limits are applied to control the number of requests and tokens used over specific time intervals, helping to manage the API’s load. These limits vary by model and tier:

  • Requests per minute (RPM): Limits the number of requests that can be made in a minute.
  • Tokens per minute (TPM): Limits the number of tokens processed in a minute.
  • Tokens per day (TPD): Limits the total number of tokens that can be used in a day.

For example, the Claude 3.5 Sonnet model allows 5 requests per minute, 20,000 tokens per minute, and 300,000 tokens per day in the Free tier. Higher tiers may offer increased limits.

If a rate limit is exceeded, the API will return a 429 error, indicating that you have surpassed the allowed usage. The API response will include headers showing the current usage and when the limits will reset, allowing users to manage their requests accordingly.
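A common way to handle 429 responses is exponential backoff. Below is a minimal sketch; the `call_with_retry` helper and its backoff schedule are illustrative, not part of the SDK (the Python SDK also performs some retries on its own).

```python
import time

def call_with_retry(call, max_attempts: int = 5, sleep=time.sleep):
    """Invoke `call` (e.g. a lambda wrapping client.messages.create),
    backing off exponentially when a rate-limit (HTTP 429) error is raised."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            # The Python SDK's RateLimitError carries status_code 429;
            # re-raise anything else, or give up after the last attempt
            if getattr(exc, "status_code", None) != 429 or attempt == max_attempts - 1:
                raise
            sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
```

You would wrap a request as `call_with_retry(lambda: client.messages.create(...))`. In production, also consult the reset headers mentioned above rather than relying on a fixed schedule.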

Getting Started with Anthropic Claude API {#getting-started-with-anthropic-claude-api}

To begin using the Anthropic Claude API, follow these steps to set up your environment and make your first API call.

Start with the Workbench

Before diving into code, it’s recommended to start with the Workbench, a web-based interface within the Anthropic Console that allows you to experiment with Claude’s capabilities interactively.

  1. Log into the Anthropic Console and navigate to the Workbench.

  2. Ask Claude a question by typing it into the **User** section. For example, you could ask, "Why is the Earth round?"

  3. Run the query and observe the response on the right side. You can tweak the response format by setting a System Prompt. For example, if you want Claude to respond in a playful rapper form, you can use the system prompt:

     ```
     You are a hip rapper. Respond with clever rap lines.
     ```
    

     When you click Run again, Claude’s response will be adjusted according to the prompt.

  4. Convert your Workbench session into code by clicking Get Code. This will generate the Python or TypeScript code that replicates your session, which you can then integrate into your application.

Install the SDK

Next, install the necessary SDK to interface with the API. Anthropic provides SDKs for Python (3.7+) and TypeScript (4.5+).

Python:

  1. Create a virtual environment to manage your dependencies:

     ```
     python -m venv claude-env
     ```
    
  2. Activate the virtual environment:

     On macOS or Linux:

     ```
     source claude-env/bin/activate
     ```

     On Windows:

     ```
     claude-env\Scripts\activate
     ```

  3. Install the Anthropic SDK:

     ```
     pip install anthropic
     ```

Set Your API Key

Each API call requires an API key for authentication. The SDK expects this key to be set as an environment variable named ANTHROPIC_API_KEY.

macOS and Linux:

    export ANTHROPIC_API_KEY='My-API-key'

Windows:

    set ANTHROPIC_API_KEY=My-API-key

Alternatively, you can pass the API key directly to the client when initializing it in your code.

Claude API Examples {#claude-api-examples}

Basic Request and Response

To make a basic request to the Claude API and receive a response, you can use the following code snippet. This example shows how to send a simple message to Claude and receive a reply.

Python example:

    import anthropic

    # Initialize the Claude client
    client = anthropic.Anthropic()

    # Send a basic message to Claude
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Hey, Claude"}
        ],
    )

    # Print the response
    print(message.content)

Response example:

    {
      "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "Hey there!"
        }
      ],
      "model": "claude-3-5-sonnet-20240620",
      "stop_reason": "end_turn",
      "usage": {
        "input_tokens": 12,
        "output_tokens": 6
      }
    }

In this example, the user sends a message saying "Hey, Claude," and the API responds with "Hey there!" This basic interaction demonstrates how to send a simple prompt and receive a straightforward reply.

Multiple Conversational Turns

The Claude API is stateless, so each request must include the full conversation history. Resending prior user and assistant turns with every call is how developers build multi-turn interactions over time.

Python example:

    import anthropic

    # Initialize the Claude client
    client = anthropic.Anthropic()

    # Send a series of messages to create a conversation
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Hey, Claude"},
            {"role": "assistant", "content": "Hey there!"},
            {"role": "user", "content": "Explain what is an LLM."}
        ],
    )

    # Print the response
    print(message.content)

Response example:

    {
      "id": "msg_018gCsTGsXkYJVqYPxTgDHBU",
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "Sure, here's a description of a large language model (LLM)..."
        }
      ],
      "stop_reason": "end_turn",
      "usage": {
        "input_tokens": 50,
        "output_tokens": 500
      }
    }

This example continues a conversation with Claude: the earlier exchange is included in the request, so Claude answers the follow-up question with the full context of the conversation.
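Because every request resends the history, applications typically keep a running message list. A minimal sketch of such a loop follows; the `chat` helper is my own illustration, and `client` is assumed to be an `anthropic.Anthropic` instance.

```python
def chat(client, history, user_text, model="claude-3-5-sonnet-20240620"):
    """Send `user_text` along with the full prior `history` and record the reply.

    `client` is an anthropic.Anthropic instance; `history` is the list of
    {"role", "content"} dicts from earlier turns, resent in full on each call.
    """
    history.append({"role": "user", "content": user_text})
    message = client.messages.create(model=model, max_tokens=1024, messages=history)
    reply = message.content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply
```

Keep an eye on the history length: since the whole list is resent each turn, input token usage (and cost) grows with every exchange.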

Customizing the Response

You can influence Claude’s response by pre-filling part of the response content. This is particularly useful when you want to guide Claude towards a specific type of answer, such as multiple-choice responses.

Python Example:

    import anthropic

    # Initialize the Claude client
    client = anthropic.Anthropic()

    # Send a multiple-choice question and guide Claude's response
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1,
        messages=[
            {"role": "user", "content": "What is the Greek for fear? (A) Arachnea, (B) Philosophia, (C) Phobia"},
            {"role": "assistant", "content": "The answer is ("}
        ]
    )

    # Print the response
    print(message)

Response example:

    {
      "id": "msg_01Q8Faay6S7QPTvEUUQARt7h",
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "C"
        }
      ],
      "model": "claude-3-5-sonnet-20240620",
      "stop_reason": "max_tokens",
      "stop_sequence": null,
      "usage": {
        "input_tokens": 50,
        "output_tokens": 1
      }
    }

In this example, Claude is prompted with a multiple-choice question about the Greek word for fear, and pre-filling the assistant turn with "The answer is (" guides the response to a single character, "C," indicating the correct choice.

Vision

Claude can process both text and images in a request. For images, you need to encode them in base64 and include them in the API call. Claude supports various image formats like JPEG, PNG, GIF, and WebP.

Python example:

    import anthropic
    import base64
    import httpx

    # Download and encode the image
    # Note: in practice this should be a direct link to the image file itself
    # (e.g. an upload.wikimedia.org URL); a Wikipedia media-viewer page URL
    # like the one below returns an HTML page rather than JPEG bytes
    image_url = "https://en.wikipedia.org/wiki/Wolf#/media/File:Eurasian_wolf_2.jpg"
    image_media_type = "image/jpeg"
    image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")

    # Initialize the Claude client and send an image for analysis
    client = anthropic.Anthropic()

    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": image_media_type,
                            "data": image_data,
                        },
                    },
                    {"type": "text", "text": "What type of animal is in this image?"}
                ],
            }
        ],
    )

    # Print the response
    print(message)

Response example:

    {
      "id": "msg_01EcyWo6m4hyW8KHs2y2pei5",
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "This image shows a wolf, specifically a Eurasian wolf at Polar Zoo in Bardu, Norway. The wolf is shown walking on the snow, with its paws partially covered by the snow. The image is focused on capturing the general idea and features of a wolf."
        }
      ],
      "model": "claude-3-5-sonnet-20240620",
      "stop_reason": "end_turn",
      "stop_sequence": null,
      "usage": {
        "input_tokens": 2000,
        "output_tokens": 100
      }
    }
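For images stored on disk, the same base64 content block can be built from a local file. Below is a small sketch; the `image_block` helper name is my own, and the caller supplies the path and media type.

```python
import base64
from pathlib import Path

def image_block(path: str, media_type: str = "image/jpeg") -> dict:
    """Build the base64 image content block expected by the Messages API
    from a local file on disk."""
    data = base64.b64encode(Path(path).read_bytes()).decode("utf-8")
    return {
        "type": "image",
        "source": {"type": "base64", "media_type": media_type, "data": data},
    }
```

The returned dict can be placed in a message's content list alongside a text block, exactly as in the URL-based example above.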

Build LLM Applications with Claude and Acorn

To see what you can start building today with GPTScript, visit our docs at https://gptscript-ai.github.io/knowledge/. For a great example of RAG at work on GPTScript, check out our blog post GPTScript Knowledge Tool v0.3 Introduces One-Time Configuration for Embedding Model Providers.
