A step-by-step guide to integrating Google Gemini into your Angular applications.

In this post, you are going to learn how to access Gemini APIs to create the next generation of AI-enabled Applications using Angular.

We’ll build a simple application to test Gemini Pro and Gemini Pro Vision via the official client. As a bonus, we will also show how you can use Vertex AI via its REST API.

Here’s what we’ll cover:

  • Introduction to Google Gemini
  • Get an API Key from Google AI Studio
  • Creating the Angular Application and setting up the project
  • Google AI JavaScript SDK: text, multimodal, chat and streaming examples
  • Bonus: generate AI content with Vertex AI via the REST API
  • Running the code and verifying the response

Introduction to Google Gemini

Google Gemini is a family of large language models (LLMs) offering state-of-the-art AI capabilities created by Google AI. The Gemini models include:

  • Gemini Ultra. The largest and most capable model, excelling at complex tasks such as coding, logical reasoning and creative collaboration. Available via Gemini Advanced (formerly Bard).
  • Gemini Pro. A mid-size model optimised for a wide range of tasks, with performance comparable to Ultra. Available via the Gemini chatbot, Google Workspace and Google Cloud. Gemini Pro 1.5 brings improved performance, including a breakthrough in long-context understanding: it can process up to one million tokens spanning text, code, images, audio and video.
  • Gemini Nano. A lightweight model designed for on-device use, bringing AI capabilities to phones and small devices. Available on the Pixel 8 and the Samsung S24 series.
  • Gemma. Open-source models inspired by Gemini, offering state-of-the-art performance at smaller sizes and designed with responsible AI principles in mind.

Get an API Key from Google AI Studio

Visit aistudio.google.com and create an API key. If you are not in the US, you can use Vertex AI, which is available globally, or use a VPN service.

Use the API Key together with Google AI JavaScript SDK or the REST API.

Creating the Angular Application

Use the Angular CLI to generate a new application:

ng new google-ai-gemini-angular

This scaffolds a new project with the latest Angular version.

Setting up the project

Run this command to add a new environment:

ng g environments

This will create the following files for development and production:

src/environments/environment.development.ts
src/environments/environment.ts

Edit the development file to include your API Key:

 

// src/environments/environment.development.ts
export const environment = {
 API_KEY: "<YOUR-API-KEY>",
};
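For completeness, the production file can mirror the same shape. This is only a sketch; in a real deployment you would avoid shipping a raw API key in client-side code.

// src/environments/environment.ts (production) — hedged example
export const environment = {
  API_KEY: "<YOUR-API-KEY>",
};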

Google AI JavaScript SDK

This is the official client to access Gemini models. We will use it to:

  • Generate text from text-only input (text)
  • Generate text from text-and-images input (multimodal)
  • Build multi-turn conversations (chat)
  • Generate content as it is created using streaming (stream)

Add this package to your project:

npm install @google/generative-ai

Initialise your model

Before calling Gemini we need to go through the model initialisation. This includes the following steps:

  1. Initialising GoogleGenerativeAI client with your API key.
  2. Choosing a Gemini model: gemini-pro or gemini-pro-vision.
  3. Setting up model parameters including safetySettings, temperature, topP, topK and maxOutputTokens.

 

import {
  GoogleGenerativeAI, HarmBlockThreshold, HarmCategory
} from '@google/generative-ai';
...
const genAI = new GoogleGenerativeAI(environment.API_KEY);
const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
  },
];
const generationConfig = {
  temperature: 0.9,
  topP: 1,
  topK: 32,
  maxOutputTokens: 100, // limit output
};
const model = genAI.getGenerativeModel({
  model: 'gemini-pro', // or 'gemini-pro-vision'
  safetySettings,
  generationConfig,
});
...

For safety settings you can use the defaults (block medium or high) or adjust them to your needs. In the example, we made the harassment setting stricter so that outputs with a low or higher probability of being unsafe are blocked. You can find a more detailed explanation here.
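As an illustration, here is a hedged sketch of what covering every harm category explicitly might look like. The categories and thresholds come from the SDK's HarmCategory and HarmBlockThreshold enums, and the values shown simply mirror the default (block medium and above) behaviour:

// Hedged example: explicit thresholds for every category instead of SDK defaults.
const explicitSafetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  },
  {
    category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  },
  {
    category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  },
  {
    category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  },
];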

These are all the models available and their default settings. There’s a limit of 60 requests per minute. You can learn more about model parameters here.
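If you expect to hit that limit, a small retry helper can smooth things over. This is only a minimal sketch with a hypothetical generateWithRetry helper and simple exponential backoff; it is not part of the official SDK:

import { GenerativeModel } from '@google/generative-ai';

// Hypothetical helper: retry a Gemini call with exponential backoff.
async function generateWithRetry(
  model: GenerativeModel,
  prompt: string,
  maxRetries = 3
): Promise<string> {
  let delayMs = 1000;
  for (let attempt = 0; ; attempt++) {
    try {
      const result = await model.generateContent(prompt);
      return result.response.text();
    } catch (error) {
      if (attempt >= maxRetries) throw error; // give up after maxRetries attempts
      await new Promise(resolve => setTimeout(resolve, delayMs));
      delayMs *= 2; // back off before the next attempt
    }
  }
}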

Generate text from text-only input (text)

Below you can see a code snippet demonstrating Gemini Pro with a text-only input.

async TestGeminiPro() {
  // Model initialisation missing for brevity
  const prompt = 'What is the largest number with a name? Brief answer.';
  const result = await model.generateContent(prompt);
  const response = await result.response;
  console.log(response.text());
}

Generate text from text-and-images input (multimodal)

This example demonstrates how to use Gemini Pro Vision with text and images as input. We are using an image in src/assets for convenience.

async TestGeminiProVisionImages() {
  try {
    let imageBase64 = await this.fileConversionService.convertToBase64(
      'assets/baked_goods_2.jpeg'
    );

    // Check for successful conversion to Base64
    if (typeof imageBase64 !== 'string') {
      console.error('Image conversion to Base64 failed.');
      return;
    }
    // Model initialisation missing for brevity
    let prompt = [
      {
        inlineData: {
          mimeType: 'image/jpeg',
          data: imageBase64,
        },
      },
      {
        text: 'Provide a recipe.',
      },
    ];
    const result = await model.generateContent(prompt);
    const response = await result.response;
    console.log(response.text());
  } catch (error) {
    console.error('Error converting file to Base64', error);
  }
}

In order to convert the input image into Base64 you can use the FileConversionService below or an external library.

// file-conversion.service.ts
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { firstValueFrom } from 'rxjs';

@Injectable({
  providedIn: 'root',
})
export class FileConversionService {
  constructor(private http: HttpClient) {}

  async convertToBase64(filePath: string): Promise<string | ArrayBuffer | null> {
    const blob = await firstValueFrom(this.http.get(filePath, { responseType: 'blob' }));
    return new Promise((resolve, reject) => {
      const reader = new FileReader();
      reader.onloadend = () => {
        const base64data = reader.result as string;
        resolve(base64data.substring(base64data.indexOf(',') + 1)); // Extract only the Base64 data
      };
      reader.onerror = error => {
        reject(error);
      };
      reader.readAsDataURL(blob);
    });
  }
}

Image requirements for Gemini:

  • Supported MIME types: image/png, image/jpeg, image/webp, image/heic and image/heif.
  • Maximum of 16 images.
  • Maximum of 4MB including images and text.
  • Large images are scaled down to fit 3072 x 3072 pixels while preserving their original aspect ratio.
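If you want to guard against these limits before calling the API, a small client-side check can help. The following is only a minimal sketch with a hypothetical isSupportedImage helper; the MIME list and 4MB figure simply restate the requirements above:

// Hypothetical helper: basic client-side validation against the limits above.
const SUPPORTED_MIME_TYPES = [
  'image/png', 'image/jpeg', 'image/webp', 'image/heic', 'image/heif',
];
const MAX_REQUEST_BYTES = 4 * 1024 * 1024; // 4MB budget shared by images and text

function isSupportedImage(blob: Blob): boolean {
  return SUPPORTED_MIME_TYPES.includes(blob.type) && blob.size <= MAX_REQUEST_BYTES;
}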

Build multi-turn conversations (chat)

This example shows how to use Gemini Pro to build a multi-turn conversation.

async TestGeminiProChat() {
  // Model initialisation missing for brevity
  const chat = model.startChat({
    history: [
      {
        role: "user",
        parts: "Hi there!",
      },
      {
        role: "model",
        parts: "Great to meet you. What would you like to know?",
      },
    ],
    generationConfig: {
      maxOutputTokens: 100,
    },
  });
  const prompt = 'What is the largest number with a name? Brief answer.';
  const result = await chat.sendMessage(prompt);
  const response = await result.response;
  console.log(response.text());
}

You can use the initial user message in the history as a system prompt. Just remember to include a model response acknowledging the instructions, as shown in the sketch below. Example: User: “Adopt the role and writing style of a pirate. Don’t lose character. Reply understood if you understand these instructions.” Model: “Understood.”
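Here is a minimal sketch of that idea, reusing startChat from the previous example; the pirate instructions are just the illustrative wording from above:

// Hedged sketch: a system-prompt-style instruction placed in the chat history.
const chat = model.startChat({
  history: [
    {
      role: "user",
      parts:
        "Adopt the role and writing style of a pirate. Don't lose character. " +
        "Reply understood if you understand these instructions.",
    },
    {
      role: "model",
      parts: "Understood.",
    },
  ],
  generationConfig: {
    maxOutputTokens: 100,
  },
});
const result = await chat.sendMessage('Tell me about your ship.');
console.log((await result.response).text());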

Generate content as it is created using streaming (stream)

This example demonstrates how to use Gemini Pro to generate content using streaming.

async TestGeminiProStreaming() {
  // Model initialisation missing for brevity
  const prompt = {
    contents: [
      {
        role: 'user',
        parts: [
          {
            text: 'Generate a poem.',
          },
        ],
      },
    ],
  };
  const streamingResp = await model.generateContentStream(prompt);
  for await (const item of streamingResp.stream) {
    console.log('stream chunk: ' + item.text());
  }
  console.log('aggregated response: ' + (await streamingResp.response).text());
}

As a result of generateContentStream you receive an object from which you can read each chunk as it is generated (via stream) as well as the final aggregated response (via response).
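In an Angular component you would typically accumulate the chunks into a property bound in the template so the text appears progressively. A minimal sketch, assuming a hypothetical output field on the component:

// Hedged sketch: stream chunks into a component property for progressive rendering.
output = '';

async streamIntoView(promptText: string) {
  // Model initialisation missing for brevity (same as the examples above)
  this.output = '';
  const streamingResp = await model.generateContentStream(promptText);
  for await (const chunk of streamingResp.stream) {
    this.output += chunk.text(); // a template bound to {{ output }} updates as chunks arrive
  }
}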

Bonus: generate AI content with Vertex AI via the REST API

As an alternative to the official JavaScript client you can use the Gemini REST API provided by Vertex AI. Vertex AI is a full AI platform made available as a managed service in Google Cloud where you can train and deploy AI models including Gemini.

To secure your REST API access, you need to create an account and get the credentials for your application so only you can access it. Here are the steps:

  1. Sign up for a Google Cloud account and enable billing — this gives you access to Vertex AI.
  2. Create a new project in the Cloud Console. Make note of the project ID.
  3. Enable the Vertex AI API for your project.
  4. Install the gcloud CLI and run gcloud auth print-access-token. Save the printed access token - you’ll use this for authentication.

Once you have the project ID and access token, you’re ready to move on to the Angular app. To verify everything is set up correctly you can try these curl commands.

Edit the development file to include the project ID and access token:

// src/environments/environment.development.ts
export const environment = {
  API_KEY: "<YOUR-API-KEY>", // Google AI JavaScript SDK access
  PROJECT_ID: "<YOUR-PROJECT-ID>", // Vertex AI access
  GCLOUD_AUTH_PRINT_ACCESS_TOKEN: "<YOUR-GCLOUD-AUTH-PRINT-ACCESS-TOKEN>", // Vertex AI access
};

To make requests via the REST API, you need to include the HttpClient provider:

// app.config.ts
import { ApplicationConfig } from '@angular/core';
import { provideRouter } from '@angular/router';
import { provideHttpClient } from '@angular/common/http';

import { routes } from './app.routes';

export const appConfig: ApplicationConfig = {
  providers: [
    provideRouter(routes),
    provideHttpClient()
  ]
};

With this import, we can inject HttpClient into any component or service to make web requests.

// app.component.ts
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-root',
  standalone: true,
  ...
})
export class AppComponent implements OnInit {
  constructor(public http: HttpClient) {}
}

Accessing Vertex AI via the REST API takes a bit more work since there is no official client: we need to build the request and read the response ourselves.

async TestGeminiProWithVertexAIViaREST() {
  // Docs: https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini#request_body
  const prompt = this.buildPrompt('What is the largest number with a name?');
  const endpoint = this.buildEndpointUrl(environment.PROJECT_ID);
  let headers = this.getAuthHeaders(
    environment.GCLOUD_AUTH_PRINT_ACCESS_TOKEN
  );

  this.http.post(endpoint, prompt, { headers }).subscribe((response: any) => {
    console.log(response.candidates?.[0].content.parts[0].text);
  });
}

buildPrompt(text: string) {
  return {
    contents: [
      {
        role: 'user',
        parts: [
          {
            text: text,
          },
        ],
      },
    ],
    safety_settings: {
      category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
      threshold: 'BLOCK_LOW_AND_ABOVE',
    },
    generation_config: {
      temperature: 0.9,
      top_p: 1,
      top_k: 32,
      max_output_tokens: 100,
    },
  };
}

buildEndpointUrl(projectId: string) {
  const BASE_URL = 'https://us-central1-aiplatform.googleapis.com/';
  const API_VERSION = 'v1'; // may be different at this time
  const MODEL = 'gemini-pro';

  let url = BASE_URL; // base url
  url += API_VERSION; // api version
  url += '/projects/' + projectId; // project id
  url += '/locations/us-central1'; // google cloud region
  url += '/publishers/google'; // publisher
  url += '/models/' + MODEL; // model
  url += ':generateContent'; // action

  return url;
}

getAuthHeaders(accessToken: string) {
  const headers = new HttpHeaders().set(
    'Authorization',
    `Bearer ${accessToken}`
  );
  return headers;
}

Running the code

Uncomment the code you want to test within the ngOnInit body from the GitHub project.

ngOnInit(): void {
  // Google AI
  this.TestGeminiPro();
  // this.TestGeminiProChat();
  // this.TestGeminiProVisionImages();
  // this.TestGeminiProStreaming();

  // Vertex AI
  // this.TestGeminiProWithVertexAIViaREST();
}

To run the code, run this command in the terminal and navigate to localhost:4200.

ng serve

Verifying the response

To verify the response, you can quickly check the Console output in your browser.

console.log(response.text());
The largest number with a name is a googolplex. A googolplex is a 1 followed by 100 zeroes.

Congratulations! You now have access to Gemini capabilities.

Conclusions

By completing this tutorial, you learned:

  • How to obtain an API Key and set up access to Gemini APIs
  • How to call Gemini Pro using text and chat
  • How to handle input images for Gemini Pro Vision
  • Bonus: How to set up and call Gemini using Vertex AI via the REST API
  • How to handle the response and outputs

You now have the foundation to start building AI-powered features like advanced text generation into your Angular apps using Gemini. The complete code is available on GitHub.

Want to see a more complex project?

I built a full-blown Gemini Chatbot using Angular Material, ngx-quill and ngx-markdown, showcasing text, chat and multimodal capabilities.

Feel free to fork the project here. If you like this project, don’t forget to leave a star to show support for my work and for the other contributors in the Angular community.
