♊️ GemiNews 🗞️
Editing article
Title
Summary
Content
<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*0iPZQvd0ZVDuJjeC.jpg" /></figure><p>This week <a href="https://github.com/langchain4j">LangChain4j</a>, the LLM orchestration framework for Java developers, released version <a href="https://github.com/langchain4j/langchain4j/releases/tag/0.26.1">0.26.1</a>, which contains my first significant contribution to the open source project: <strong>support for the Imagen image generation model</strong>.</p><p><strong>Imagen</strong> is a text-to-image diffusion model that was <a href="https://imagen.research.google/">announced</a> last year. It was recently upgraded to <a href="https://deepmind.google/technologies/imagen-2/">Imagen v2</a>, which generates even higher-quality images. As I was curious to integrate it into some of my generative AI projects, I thought that would be a great first <a href="https://github.com/langchain4j/langchain4j/pull/456">contribution</a> to LangChain4j.</p><blockquote><strong><em>Caution:</em></strong><em> At the time of this writing, image generation is still only available for allow-listed accounts.</em></blockquote><blockquote><em>Furthermore, to run the snippets covered below, you should have an account on Google Cloud Platform, have created a project, configured a billing account, enabled the Vertex AI API, and authenticated with the gcloud SDK using the command: </em><em>gcloud auth application-default login.</em></blockquote><p>Now let’s dive into how to use Imagen v1 and v2 with LangChain4j in Java!</p><h3>Generate your first images</h3><p>In the examples below, I use the following constants to point at my project details: the endpoint, the region, etc.:</p><pre>private static final String ENDPOINT = "us-central1-aiplatform.googleapis.com:443";<br>private static final String LOCATION = "us-central1";<br>private static final String PROJECT = "YOUR_PROJECT_ID";<br>private static final String PUBLISHER = "google";</pre><p>First, we’re going to create an instance of the 
model:</p><pre>VertexAiImageModel imagenModel = VertexAiImageModel.builder()<br> .endpoint(ENDPOINT)<br> .location(LOCATION)<br> .project(PROJECT)<br> .publisher(PUBLISHER)<br> .modelName("imagegeneration@005")<br> .maxRetries(2)<br> .withPersisting()<br> .build();</pre><p>There are two models you can use:</p><ul><li>imagegeneration@005 corresponds to Imagen 2</li><li>imagegeneration@002 is the previous version (Imagen 1)</li></ul><p>In this article, we’ll use both models. Why? Because Imagen 2 currently doesn’t support image editing, so we’ll have to use Imagen 1 for that purpose.</p><p>The configuration above uses withPersisting() to save the generated images in a temporary folder on your system. If you don't persist the image files, the content of the image is available as Base64-encoded bytes in the returned Image objects. You can also call persistTo(somePath) to choose a particular directory where the generated files should be saved.</p><p>Let’s create our first image:</p><pre>Response<Image> imageResponse = imagenModel.generate(<br> "watercolor of a colorful parrot drinking a cup of coffee");</pre><p>The Response object wraps the created Image. You can get the Image by calling imageResponse.getContent(). And you can retrieve the URL of the image (if saved locally) with imageResponse.getContent().url(). 
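As an aside, if you keep persistence off, you can write the Base64 payload to disk yourself. Here is a minimal, self-contained sketch using only the JDK; the hard-coded string is a hypothetical stand-in for the model's actual Base64 payload (it happens to decode to the 8-byte PNG file signature):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class SaveGeneratedImage {
    public static void main(String[] args) throws Exception {
        // Hypothetical stand-in for the Base64 payload returned by the model;
        // this particular string decodes to the 8-byte PNG file signature.
        String base64Data = "iVBORw0KGgo=";

        // Decode the Base64 payload into raw image bytes
        byte[] imageBytes = Base64.getDecoder().decode(base64Data);

        // Write the bytes to a file (here, a temporary file)
        Path out = Files.createTempFile("imagen-", ".png");
        Files.write(out, imageBytes);

        // Prints the number of bytes written and the temp file path
        System.out.println("Wrote " + imageBytes.length + " bytes to " + out);
    }
}
```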
The Base64-encoded bytes can be retrieved with imageResponse.getContent().base64Data().</p><p>Some other tweaks to the model configuration:</p><ul><li>Specify the <strong>language</strong> of the prompt: language("ja") (if the language is not officially supported, it's usually translated back to English anyway).</li><li>Define a <strong>negative prompt</strong> with things you don’t want to see in the picture: negativePrompt("black feathers").</li><li>Use a particular <strong>seed</strong> so the same prompt always generates the same image: seed(1234L).</li></ul><p>So if you want to generate a picture of a pizza with a prompt in Japanese, but you don’t want pepperoni or pineapple on it, you could configure your model and generate as follows:</p><pre>VertexAiImageModel imagenModel = VertexAiImageModel.builder()<br> .endpoint(ENDPOINT)<br> .location(LOCATION)<br> .project(PROJECT)<br> .publisher(PUBLISHER)<br> .modelName("imagegeneration@005")<br> .language("ja")<br> .negativePrompt("pepperoni, pineapple")<br> .maxRetries(2)<br> .withPersisting()<br> .build();<br><br>Response<Image> imageResponse = imagenModel.generate("ピザ"); // pizza</pre><h3>Image editing with Imagen 1</h3><p>With Imagen 1, you can <a href="https://cloud.google.com/vertex-ai/docs/generative-ai/image/edit-images?hl=en">edit</a> existing images:</p><ul><li><strong>mask-based editing:</strong> you provide a mask, a black & white image whose white areas mark the parts of the original image that should be edited,</li><li><strong>mask-free editing:</strong> you just give a prompt and let the model figure out what should be edited, guided by the prompt.</li></ul><p>When generating and editing with Imagen 1, you can also configure the model to use a particular style (with Imagen 2, you just specify it in the prompt) with sampleImageStyle(VertexAiImageModel.ImageStyle.photograph):</p><p>- photograph<br>- digital_art<br>- landscape<br>- sketch<br>- watercolor<br>- 
cyberpunk<br>- pop_art</p><p>When editing an image, you can control how strong the modification should be with .guidanceScale(100). Roughly, values between 0 and 20 yield light edits, values between 20 and 100 produce more impactful edits, and values of 100 and above apply the maximum editing strength.</p><p>Let’s say I generated an image of a lush forest (I’ll use that as my original image):</p><pre>VertexAiImageModel model = VertexAiImageModel.builder()<br> .endpoint(ENDPOINT)<br> .location(LOCATION)<br> .project(PROJECT)<br> .publisher(PUBLISHER)<br> .modelName("imagegeneration@002")<br> .seed(19707L)<br> .sampleImageStyle(VertexAiImageModel.ImageStyle.photograph)<br> .guidanceScale(100)<br> .maxRetries(4)<br> .withPersisting()<br> .build();<br><br>Response<Image> forestResp = model.generate("lush forest");</pre><p>Now I want to edit my forest to add a small red tree at the bottom of the image. I’m loading a black and white mask image with a white square at the bottom, and I pass the original image, the mask image, and the modification prompt to the new edit() method:</p><pre>URI maskFileUri = getClass().getClassLoader().getResource("mask.png").toURI();<br><br>Response<Image> compositeResp = model.edit(<br> forestResp.content(), // original image to edit<br> fromPath(Paths.get(maskFileUri)), // the mask image<br> "red trees" // the new prompt<br>);</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*AfQ24hvdH9hoTLsT.jpg" /></figure><p>Another kind of editing you can do is to upscale an existing image. 
As far as I know, upscaling is only supported by Imagen v1 for now, so we’ll continue with that model.</p><p>In this example, we’ll generate a 1024x1024-pixel image, and then upscale it to 4096x4096:</p><pre>VertexAiImageModel imagenModel = VertexAiImageModel.builder()<br> .endpoint(ENDPOINT)<br> .location(LOCATION)<br> .project(PROJECT)<br> .publisher(PUBLISHER)<br> .modelName("imagegeneration@002")<br> .sampleImageSize(1024)<br> .withPersisting()<br> .persistTo(defaultTempDirPath)<br> .maxRetries(3)<br> .build();<br><br>Response<Image> imageResponse =<br> imagenModel.generate("A black bird looking itself in an antique mirror");<br><br>VertexAiImageModel imagenModelForUpscaling = VertexAiImageModel.builder()<br> .endpoint(ENDPOINT)<br> .location(LOCATION)<br> .project(PROJECT)<br> .publisher(PUBLISHER)<br> .modelName("imagegeneration@002")<br> .sampleImageSize(4096)<br> .withPersisting()<br> .persistTo(defaultTempDirPath)<br> .maxRetries(3)<br> .build();<br><br>Response<Image> upscaledImageResponse =<br> imagenModelForUpscaling.edit(imageResponse.content(), "");</pre><p>And now you have a much bigger image!</p><h3>Conclusion</h3><p>That’s about it for image generation and editing with <strong>Imagen</strong> in <strong>LangChain4j</strong> today! Be sure to use LangChain4j <strong>v0.26.1</strong>, which contains this new integration. 
And I’m looking forward to seeing the pictures you generate with it!</p><p><em>Originally published at </em><a href="https://glaforge.dev/posts/2024/02/01/image-generation-with-imagen-and-langchain4j/"><em>https://glaforge.dev</em></a><em> on February 1, 2024.</em></p><hr><p><a href="https://medium.com/google-cloud/image-generation-with-imagen-and-langchain4j-61ca08ae6aac">Image generation with Imagen and LangChain4j</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>
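A closing aside on the upscaling example above: one way to sanity-check a persisted result is to read the file back and inspect its dimensions. Here is a hypothetical, self-contained sketch; a blank in-memory image stands in for a real Imagen output file (a real run would instead read the file behind imageResponse.getContent().url()):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.nio.file.Files;
import javax.imageio.ImageIO;

public class CheckUpscaledSize {
    public static void main(String[] args) throws Exception {
        // Hypothetical placeholder: a blank 4096x4096 image standing in for
        // a persisted upscaled Imagen output on disk.
        BufferedImage placeholder = new BufferedImage(4096, 4096, BufferedImage.TYPE_INT_RGB);
        File file = Files.createTempFile("upscaled-", ".png").toFile();
        ImageIO.write(placeholder, "png", file);

        // Read the file back and report its dimensions
        BufferedImage readBack = ImageIO.read(file);
        System.out.println(readBack.getWidth() + "x" + readBack.getHeight());
        // prints 4096x4096
    }
}
```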
Author
Link
Published date
Image url
Feed url
Guid
Hidden blurb
--- !ruby/object:Feedjira::Parser::RSSEntry title: Image generation with Imagen and LangChain4j url: https://medium.com/google-cloud/image-generation-with-imagen-and-langchain4j-61ca08ae6aac?source=rss-431147437aeb------2 author: Guillaume Laforge categories: - google-cloud-platform - imagen - java - langchain - generative-ai-use-cases published: 2024-02-01 00:00:35.000000000 Z entry_id: !ruby/object:Feedjira::Parser::GloballyUniqueIdentifier is_perma_link: 'false' guid: https://medium.com/p/61ca08ae6aac carlessian_info: news_filer_version: 2 newspaper: Guillaume Laforge - Medium macro_region: Blogs rss_fields: - title - url - author - categories - published - entry_id - content
Language
Active
Ricc internal notes
Imported via /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/import-feedjira.rb on 2024-03-31 23:41:09 +0200. Content is EMPTY here. Entried: title,url,author,categories,published,entry_id,content. TODO add Newspaper: filename = /Users/ricc/git/gemini-news-crawler/webapp/db/seeds.d/../../../crawler/out/feedjira/Blogs/Guillaume Laforge - Medium/2024-02-01-Image_generation_with_Imagen_and_LangChain4j-v2.yaml
Ricc source