♊️ GemiNews 🗞️


Visualize PaLM-based LLM tokens

2024-02-05 - Guillaume Laforge (from Guillaume Laforge - Medium)

As I was working on tweaking the Vertex AI text embedding model in LangChain4j, I wanted to better understand how the textembedding-gecko model tokenizes text, in particular when implementing the Retrieval Augmented Generation approach.

The various PaLM-based models offer a computeTokens endpoint, which returns a list of tokens (encoded in Base64) and their respective IDs.

Note: at the time of this writing, there is no equivalent endpoint for Gemini models.

So I decided to create a small application that lets users:

  • input some text,
  • select a model,
  • calculate the number of tokens,
  • and visualize them with some nice pastel colors.

The available PaLM-based models are:

  • textembedding-gecko
  • textembedding-gecko-multilingual
  • text-bison
  • text-unicorn
  • chat-bison
  • code-gecko
  • code-bison
  • codechat-bison

You can try the application online, and have a look at the source code on GitHub. It's a Micronaut application; I serve the static assets as explained in my recent article. I deployed the application on Google Cloud Run, the easiest way to deploy a container and let it auto-scale for you, with a source-based deployment, as explained at the bottom here.

And voilà, I can visualize my LLM tokens!

Originally published at https://glaforge.dev on February 5, 2024.

Visualize PaLM-based LLM tokens was originally published in Google Cloud - Community on Medium, where people are continuing the conversation by highlighting and responding to this story.
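The computeTokens endpoint mentioned above returns each token as a Base64-encoded string alongside its numeric ID. A minimal Java sketch of turning such a response back into readable token text — the sample strings below are hypothetical placeholders, not actual textembedding-gecko tokenizer output:

```java
import java.util.Base64;
import java.util.List;

public class TokenDecoder {

    // Decode a list of Base64-encoded token strings (as returned in a
    // computeTokens-style response) back into human-readable text.
    public static List<String> decodeTokens(List<String> base64Tokens) {
        Base64.Decoder decoder = Base64.getDecoder();
        return base64Tokens.stream()
                .map(t -> new String(decoder.decode(t)))
                .toList();
    }

    public static void main(String[] args) {
        // Hypothetical sample payload; real tokenizer output differs.
        List<String> tokens = List.of("SGVsbG8=", "IHdvcmxk");
        System.out.println(decodeTokens(tokens)); // prints [Hello,  world]
    }
}
```

Decoding the tokens this way is what makes it possible to color each text fragment individually in the visualization, since every Base64 entry maps back to one span of the original input.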

[Blogs] 🌎 https://medium.com/google-cloud/visualize-palm-based-llm-tokens-8760b3122c0f?source=rss-431147437aeb------2

🗿article.to_s

------------------------------
Title: Visualize PaLM-based LLM tokens
[content]
As I was working on tweaking the Vertex AI text embedding model in LangChain4j, I wanted to better understand how the textembedding-gecko model tokenizes text, in particular when implementing the Retrieval Augmented Generation approach.

The various PaLM-based models offer a computeTokens endpoint, which returns a list of tokens (encoded in Base64) and their respective IDs.

Note: at the time of this writing, there is no equivalent endpoint for Gemini models.

So I decided to create a small application that lets users:

  • input some text,
  • select a model,
  • calculate the number of tokens,
  • and visualize them with some nice pastel colors.

The available PaLM-based models are:

  • textembedding-gecko
  • textembedding-gecko-multilingual
  • text-bison
  • text-unicorn
  • chat-bison
  • code-gecko
  • code-bison
  • codechat-bison

You can try the application online, and have a look at the source code on GitHub. It's a Micronaut application; I serve the static assets as explained in my recent article. I deployed the application on Google Cloud Run, the easiest way to deploy a container and let it auto-scale for you, with a source-based deployment, as explained at the bottom here.

And voilà, I can visualize my LLM tokens!

Originally published at https://glaforge.dev on February 5, 2024.

Visualize PaLM-based LLM tokens was originally published in Google Cloud - Community on Medium, where people are continuing the conversation by highlighting and responding to this story.
[/content]

Author: Guillaume Laforge
PublishedDate: 2024-02-05
Category: Blogs
NewsPaper: Guillaume Laforge - Medium
Tags: llm, google-cloud-platform, gcp-app-dev, vertex-ai, generative-ai-tools
{"id"=>19,
"title"=>"Visualize PaLM-based LLM tokens",
"summary"=>nil,
"content"=>"

As I was working on tweaking the Vertex AI text embedding model in LangChain4j, I wanted to better understand how the textembedding-gecko model tokenizes the text, in particular when we implement the Retrieval Augmented Generation approach.

The various PaLM-based models offer a computeTokens endpoint, which returns a list of tokens (encoded in Base 64) and their respective IDs.

Note: At the time of this writing, there’s no equivalent endpoint for Gemini models.

So I decided to create a small application that lets users:

  • input some text,
  • select a model,
  • calculate the number of tokens,
  • and visualize them with some nice pastel colors.

The available PaLM-based models are:

  • textembedding-gecko
  • textembedding-gecko-multilingual
  • text-bison
  • text-unicorn
  • chat-bison
  • code-gecko
  • code-bison
  • codechat-bison

You can try the application online.

And also have a look at the source code on Github. It’s a Micronaut application. I serve the static assets as explained in my recent article. I deployed the application on Google Cloud Run, the easiest way to deploy a container, and let it auto-scale for you. I did a source based deployment, as explained at the bottom here.

And voilà I can visualize my LLM tokens!

Originally published at https://glaforge.dev on February 5, 2024.

Visualize PaLM-based LLM tokens was originally published in Google Cloud - Community on Medium, where people are continuing the conversation by highlighting and responding to this story.

",
"author"=>"Guillaume Laforge",
"link"=>"https://medium.com/google-cloud/visualize-palm-based-llm-tokens-8760b3122c0f?source=rss-431147437aeb------2",
"published_date"=>Mon, 05 Feb 2024 00:00:45.000000000 UTC +00:00,
"image_url"=>nil,
"feed_url"=>"https://medium.com/google-cloud/visualize-palm-based-llm-tokens-8760b3122c0f?source=rss-431147437aeb------2",
"language"=>nil,
"active"=>true,
"ricc_source"=>"feedjira::v1",
"created_at"=>Sun, 31 Mar 2024 21:41:10.233814000 UTC +00:00,
"updated_at"=>Mon, 13 May 2024 18:38:07.960433000 UTC +00:00,
"newspaper"=>"Guillaume Laforge - Medium",
"macro_region"=>"Blogs"}