♊️ GemiNews 🗞️


🗞️Getting Started with Claude 3 on Google Cloud

🗿Semantically Similar Articles (by :title_embedding)

Getting Started with Claude 3 on Google Cloud

2024-04-01 - Vaibhav Malpani (from Google Cloud - Medium)

Google Cloud recently announced that Anthropic’s Claude 3 models will be available on Google Cloud (Sonnet and Haiku for now), with Opus to be added in the coming weeks.

What is Claude 3?

Claude 3 comes in three state-of-the-art models: Opus, Sonnet and Haiku.

Opus: Excels at complex tasks and understands new situations with impressive, human-like ability. It pushes the boundaries of what AI can do.

Sonnet: Balances performance and cost, well suited for businesses that need fast and reliable performance.

Haiku: Very fast and small, giving lightning-quick answers to questions. This lets you create AI that feels like talking to a real person.

Comparison of the Claude 3 models with the GPT and Gemini models:

Cost comparison between the three models:

How to get started?

  1. Go to the Model Garden tab in Vertex AI and open Sonnet or Haiku.
  2. Click Enable and fill in the basic details about your organization, or just about yourself.
  3. Once step 2 is done, wait 2–3 minutes for the model to be enabled.
  4. You can now view the code for interacting with the Claude 3 models. It includes many examples: text input, image input, streaming responses, calling via the API, and calling via the SDK (minimal sketches of the streaming and REST options follow the examples below).

Steps to interact with Claude 3 using the SDK

!pip3 install anthropic[vertex]

MODEL = "claude-3-sonnet@20240229"  # for Haiku: claude-3-haiku@20240307
REGION = "us-central1"
PROJECT_ID = "[your-project-id]"

import vertexai
import json
vertexai.init(project=PROJECT_ID, location=REGION)

1. Text Input

from anthropic import AnthropicVertex

client = AnthropicVertex(region=REGION, project_id=PROJECT_ID)
message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Send me a recipe for Pizza.",
        }
    ],
    model=MODEL,
)
data = json.loads(message.model_dump_json(indent=2))["content"][0]
print(data["text"])

Query: Send me a recipe for Pizza.

Response:

2. Single Image Input

Image used: Image 1

import base64
import httpx
from anthropic import AnthropicVertex

client = AnthropicVertex(region=REGION, project_id=PROJECT_ID)

image1_url = "https://cache.getarchive.net/Prod/thumb/cdn12/L3Bob3RvLzIwMTYvMTIvMzEvdHJhZmZpYy1qYW0tdHJhZmZpYy1pbmRpYS10cmFuc3BvcnRhdGlvbi10cmFmZmljLTFlNDJiZi0xMDI0LmpwZw%3D%3D/1280/720/jpg"
image1_media_type = "image/jpeg"
image1_data = base64.b64encode(httpx.get(image1_url).content).decode("utf-8")

message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image1_media_type,
                        "data": image1_data,
                    },
                },
                {"type": "text", "text": "Describe the image and get the location and weather."},
            ],
        }
    ],
    model=MODEL,
)
data = json.loads(message.model_dump_json(indent=2))["content"][0]
print(data["text"])

Query: Describe the image and get the location and weather.

Response:

3. Multi-Image Input

Images used: Image 1 and Image 2

import base64
import httpx
from anthropic import AnthropicVertex

client = AnthropicVertex(region=REGION, project_id=PROJECT_ID)

image1_url = "https://parkplus.io/_next/image?url=https%3A%2F%2Fstrapi-file-uploads.s3.ap-south-1.amazonaws.com%2Fopen_top_cars_2da902c4b3.jpg&w=1920&q=75"
image2_url = "https://images.pexels.com/photos/3817871/pexels-photo-3817871.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=2"
image1_media_type = "image/jpeg"
image1_data = base64.b64encode(httpx.get(image1_url).content).decode("utf-8")
image2_data = base64.b64encode(httpx.get(image2_url).content).decode("utf-8")

message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image1_media_type,
                        "data": image1_data,
                    },
                },
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image1_media_type,
                        "data": image2_data,
                    },
                },
                {"type": "text", "text": "What are the similarities and differences between these two images"},
            ],
        }
    ],
    model=MODEL,
)
data = json.loads(message.model_dump_json(indent=2))["content"][0]
print(data["text"])

Query: What are the similarities and differences between these two images

Response:
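The Model Garden samples also cover streaming responses and calling the model over the REST API, which the list above mentions but which are not shown here. Below are two minimal sketches, not taken from the original post: the first assumes the streaming helper (client.messages.stream and its text_stream iterator) provided by recent versions of the anthropic Python SDK; the second assumes the Vertex AI rawPredict endpoint for the anthropic publisher, the "vertex-2023-10-16" anthropic_version value, and Application Default Credentials for authentication. Verify both against the code shown in Model Garden; MODEL, REGION and PROJECT_ID are the variables defined in the setup above.

Streaming a text response (sketch):

from anthropic import AnthropicVertex

client = AnthropicVertex(region=REGION, project_id=PROJECT_ID)

# Stream the reply incrementally instead of waiting for the full message.
with client.messages.stream(
    max_tokens=1024,
    messages=[{"role": "user", "content": "Send me a recipe for Pizza."}],
    model=MODEL,
) as stream:
    for text in stream.text_stream:  # incremental text deltas
        print(text, end="", flush=True)

Calling via the REST API (sketch; the endpoint path and version string are assumptions):

import httpx
import google.auth
from google.auth.transport.requests import Request

# Obtain an OAuth access token from Application Default Credentials
# (requires the google-auth package and e.g. `gcloud auth application-default login`).
credentials, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(Request())

url = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/{REGION}/publishers/anthropic/models/{MODEL}:rawPredict"
)
payload = {
    "anthropic_version": "vertex-2023-10-16",  # assumed Vertex-specific version string
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Send me a recipe for Pizza."}],
}
response = httpx.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json=payload,
    timeout=60.0,
)
print(response.json()["content"][0]["text"])  # Messages API response body

As a side note, the json.loads(message.model_dump_json(indent=2)) round-trip used in the examples above can usually be replaced by reading the typed response object directly, e.g. print(message.content[0].text).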
Conclusion:

Claude 3 offers capabilities such as:

  1. Performing complex cognitive tasks.
  2. Transcribing and analyzing static images.
  3. Generating code.
  4. Translating between various languages in real time.
  5. A range of models that balance speed and performance depending on the use case.
  6. Security, capability and reliability.

If you liked this post, please clap for it. Follow me if you want to read more such posts!

Twitter: https://twitter.com/IVaibhavMalpani
LinkedIn: https://www.linkedin.com/in/ivaibhavmalpani/

Getting Started with Claude 3 on Google Cloud was originally published in Google Cloud - Community on Medium, where people are continuing the conversation by highlighting and responding to this story.

[Blogs] 🌎 https://medium.com/google-cloud/claude-3-on-google-cloud-20c65b308f01?source=rss----e52cf94d98af---4

🗿article.to_s

------------------------------
Title: Getting Started with Claude 3 on Google Cloud
[content]
[/content]

Author: Vaibhav Malpani
PublishedDate: 2024-04-01
Category: Blogs
NewsPaper: Google Cloud - Medium
Tags: anthropic-claude, multimodal, generative-ai, gemini, google-cloud-platform
{"id"=>1592,
"title"=>"Getting Started with Claude 3 on Google Cloud",
"summary"=>nil,
"content"=>"

Google Cloud recently announced that Anthropic’s Claude 3 models will be available on Google Cloud (Sonnet and Haiku for now), with Opus to be added in the coming weeks.

\"\"

What is Claude 3?

Claude 3 comes with 3 state-of-the-art models: Opus, Sonnet and Haiku.

Opus: Excels at complex tasks and understands new situations with impressive, human-like ability. It pushes the boundaries of what AI can do.

Sonnet: Balances performance and cost, well-suited for businesses needing fast and reliable performance.

Haiku: Super fast and small, giving lightning quick answers to questions. This lets you create AI that feels like talking to a real person.

Comparison of the Claude 3 models with GPT and Gemini Models.

\"\"

Cost Comparison between 3 Models:

\"\"

How to get started?

  1. Go to the Model Garden tab in Vertex AI and open Sonnet or Haiku.
  2. Click Enable and fill in the basic details about your organization, or just about yourself.
  3. Once step 2 is done, wait 2–3 minutes for the model to be enabled.
  4. You can now view the code for interacting with the Claude 3 models. It includes many examples: text input, image input, streaming responses, calling via the API, and calling via the SDK.

Steps to interact with Claude 3 using the SDK

!pip3 install anthropic[vertex]
MODEL = "claude-3-sonnet@20240229" #for Haiku claude-3-haiku@20240307
REGION = "us-central1"
PROJECT_ID = "[your-project-id]"

import vertexai
import json
vertexai.init(project=PROJECT_ID, location=REGION)

1. Text Input

from anthropic import AnthropicVertex

client = AnthropicVertex(region=REGION, project_id=PROJECT_ID)
message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Send me a recipe for Pizza.",
        }
    ],
    model=MODEL,
)
data = json.loads(message.model_dump_json(indent=2))["content"][0]
print(data["text"])

Query: Send me a recipe for Pizza.

Response:

\"\"
\"\"

2. Single Image Input

Image Used:

\"\"
Image 1
import base64

import httpx
from anthropic import AnthropicVertex

client = AnthropicVertex(region=REGION, project_id=PROJECT_ID)

image1_url = "https://cache.getarchive.net/Prod/thumb/cdn12/L3Bob3RvLzIwMTYvMTIvMzEvdHJhZmZpYy1qYW0tdHJhZmZpYy1pbmRpYS10cmFuc3BvcnRhdGlvbi10cmFmZmljLTFlNDJiZi0xMDI0LmpwZw%3D%3D/1280/720/jpg"
image1_media_type = "image/jpeg"
image1_data = base64.b64encode(httpx.get(image1_url).content).decode("utf-8")


message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image1_media_type,
                        "data": image1_data,
                    },
                },
                {"type": "text", "text": "Describe the image and get the location and weather."},
            ],
        }
    ],
    model=MODEL,
)
data = json.loads(message.model_dump_json(indent=2))["content"][0]
print(data["text"])

Query: Describe the image and get the location and weather.

Response:

\"\"

3. Multi Image Input:

Images Used:

\"\"
Image 1
\"\"
Image 2
import base64

import httpx
from anthropic import AnthropicVertex

client = AnthropicVertex(region=REGION, project_id=PROJECT_ID)

image1_url = "https://parkplus.io/_next/image?url=https%3A%2F%2Fstrapi-file-uploads.s3.ap-south-1.amazonaws.com%2Fopen_top_cars_2da902c4b3.jpg&w=1920&q=75"
image2_url = "https://images.pexels.com/photos/3817871/pexels-photo-3817871.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=2"
image1_media_type = "image/jpeg"
image1_data = base64.b64encode(httpx.get(image1_url).content).decode("utf-8")
image2_data = base64.b64encode(httpx.get(image2_url).content).decode("utf-8")


message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image1_media_type,
                        "data": image1_data,
                    },
                },
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image1_media_type,
                        "data": image2_data,
                    },
                },
                {"type": "text", "text": "What are the similarities and differences between these two images"},
            ],
        }
    ],
    model=MODEL,
)
data = json.loads(message.model_dump_json(indent=2))["content"][0]
print(data["text"])

Query: What are the similarities and differences between these two images

Response:

\"\"

Conclusion:

Claude 3 offers capabilities such as:

  1. Performing complex cognitive tasks.
  2. Transcribing and analyzing static images.
  3. Generating code.
  4. Translating between various languages in real time.
  5. A range of models that balance speed and performance depending on the use case.
  6. Security, capability and reliability.

If you liked this post, please Clap for it. Follow me if you want to read more such posts!

Twitter: https://twitter.com/IVaibhavMalpani
LinkedIn: https://www.linkedin.com/in/ivaibhavmalpani/

\"\"

Getting Started with Claude 3 on Google Cloud was originally published in Google Cloud - Community on Medium, where people are continuing the conversation by highlighting and responding to this story.

",
"author"=>"Vaibhav Malpani",
"link"=>"https://medium.com/google-cloud/claude-3-on-google-cloud-20c65b308f01?source=rss----e52cf94d98af---4",
"published_date"=>Mon, 01 Apr 2024 04:11:36.000000000 UTC +00:00,
"image_url"=>nil,
"feed_url"=>"https://medium.com/google-cloud/claude-3-on-google-cloud-20c65b308f01?source=rss----e52cf94d98af---4",
"language"=>nil,
"active"=>true,
"ricc_source"=>"feedjira::v1",
"created_at"=>Wed, 03 Apr 2024 14:28:20.760925000 UTC +00:00,
"updated_at"=>Mon, 13 May 2024 19:02:17.011535000 UTC +00:00,
"newspaper"=>"Google Cloud - Medium",
"macro_region"=>"Blogs"}