Generating replies using Langchain multiple chains and Gemini in NestJS

Reading Time: 7 minutes



In this blog post, I demonstrate generating replies with Langchain multiple chains. On auction sites such as eBay, buyers can rate and comment on sales transactions. When the feedback is negative, the seller must reply promptly to resolve the dispute. This demo aims to generate responses in the same language as the buyer's feedback, according to its tone (positive, neutral, or negative) and topics. The earlier chains obtain answers from the Gemini model, and their outputs become the inputs of the next chain. The model then receives the final prompt and generates a reply designed to keep customers happy.
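Before diving into the NestJS code, the idea of prompt chaining can be sketched in plain TypeScript. The `askModel` function below is a stub standing in for a real LLM call (the actual demo uses Gemini through Langchain); the point is only the shape: intermediate chains answer questions about the feedback, and their answers become variables in the final reply prompt.

```typescript
// Stub standing in for a real LLM call; the actual demo calls Gemini via Langchain.
async function askModel(prompt: string): Promise<string> {
  if (prompt.startsWith('Language of:')) return 'English';
  if (prompt.startsWith('Sentiment of:')) return 'POSITIVE';
  return 'Thank you for your kind feedback!';
}

// Each intermediate "chain" answers one question about the feedback;
// those answers then become variables in the final reply prompt.
async function generateReply(feedback: string): Promise<string> {
  const [language, sentiment] = await Promise.all([
    askModel(`Language of: ${feedback}`),
    askModel(`Sentiment of: ${feedback}`),
  ]);
  return askModel(`The customer wrote a ${sentiment} feedback in ${language}. Reply briefly and politely.`);
}
```

The real implementation later in this post expresses the same flow with Langchain's `RunnableMap` and `RunnableSequence` instead of hand-rolled promises.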

Let's go!

Generate Gemini API Key

Go to the Gemini API key page to generate an API key for a new or an existing Google Cloud project.

Create a new NestJS Project

nest new nestjs-langchain-customer-feedback

Install dependencies

npm i --save-exact @nestjs/swagger @nestjs/throttler dotenv compression helmet @google/generative-ai class-validator class-transformer langchain @langchain/core @langchain/google-genai

Generate a Feedback Module

nest g mo advisoryFeedback
nest g co advisoryFeedback/presenters/http/advisoryFeedback --flat
nest g s advisoryFeedback/application/advisoryFeedback --flat
nest g s advisoryFeedback/application/advisoryFeedbackPromptChainingService --flat

Create an AdvisoryFeedbackModule module, a controller, a service for the API, and another service to build chained prompts.

Define Gemini environment variables

// .env.example

PORT=3002
GOOGLE_GEMINI_API_KEY=<google gemini api key>
GOOGLE_GEMINI_MODEL=<google gemini model>

Copy .env.example to .env, and replace GOOGLE_GEMINI_API_KEY and GOOGLE_GEMINI_MODEL with the actual API Key and the Gemini model, respectively.

  • PORT – port number of the NestJS application
  • GOOGLE_GEMINI_MODEL – the Google model; I used Gemini 1.5 Pro in this demo

Add .env to the .gitignore file to prevent accidentally committing the Gemini API Key to the GitHub repo.

Add configuration files

The project has 3 configuration files. validate.config.ts validates that the payload is valid before any request can route to the controller to execute.

// validate.config.ts

import { ValidationPipe } from '@nestjs/common';

export const validateConfig = new ValidationPipe({
  whitelist: true,
  stopAtFirstError: true,
  forbidUnknownValues: false,
});

env.config.ts extracts the environment variables from process.env and stores the values in the env object.

// env.config.ts

import dotenv from 'dotenv';

dotenv.config();

export const env = {
  PORT: parseInt(process.env.PORT || '3000'),
  GEMINI: {
    API_KEY: process.env.GOOGLE_GEMINI_API_KEY || '',
    MODEL_NAME: process.env.GOOGLE_GEMINI_MODEL || 'gemini-pro',
  },
};

throttler.config.ts defines the rate limit of the API.

// throttler.config.ts

import { ThrottlerModule } from '@nestjs/throttler';

export const throttlerConfig = ThrottlerModule.forRoot([
  {
    ttl: 60000,
    limit: 10,
  },
]);

Each route allows ten requests in 60,000 milliseconds or 1 minute.
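To make the ttl/limit semantics concrete, here is a minimal fixed-window counter of my own, in plain TypeScript. It is an illustration of the configured behavior, not how @nestjs/throttler is implemented internally.

```typescript
// Minimal fixed-window rate limiter: at most `limit` requests per `ttl` milliseconds.
class FixedWindowLimiter {
  private windowStart = 0;
  private count = 0;

  constructor(private readonly ttl: number, private readonly limit: number) {}

  tryRequest(now: number): boolean {
    // Start a fresh window once the ttl has elapsed.
    if (now - this.windowStart >= this.ttl) {
      this.windowStart = now;
      this.count = 0;
    }
    if (this.count >= this.limit) return false; // throttled (NestJS responds with HTTP 429)
    this.count += 1;
    return true;
  }
}
```

With `ttl: 60000` and `limit: 10`, the eleventh request inside the same minute is rejected, and the counter resets when a new window begins.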

Bootstrap the application

// bootstrap.ts

import { NestFactory } from '@nestjs/core';
import { NestExpressApplication } from '@nestjs/platform-express';
import { DocumentBuilder, SwaggerModule } from '@nestjs/swagger';
import compression from 'compression';
import express from 'express';
import helmet from 'helmet';
import { AppModule } from '~app.module';
import { env } from '~configs/env.config';
import { validateConfig } from '~configs/validate.config';

export class Bootstrap {
  private app: NestExpressApplication;

  async initApp() { = await NestFactory.create<NestExpressApplication>(AppModule);
  }

  enableCors() {;
  }

  setupMiddleware() {{ limit: '1000kb' }));{ extended: false }));;;
  }

  setupGlobalPipe() {;
  }

  async startApp() {
    await${env.PORT});
  }

  setupSwagger() {
    const config = new DocumentBuilder()
      .setTitle('ESG Advisory Feedback with Langchain multiple chains and Gemini')
      .setDescription('Integrate with Langchain to improve ESG advisory feedback by prompt chaining')
      .addTag('Langchain, Gemini 1.5 Pro Model, Multiple Chains')
      .build();
    const document = SwaggerModule.createDocument(, config);
    SwaggerModule.setup('api',, document);
  }
}
I added a Bootstrap class to set up Swagger, middleware, the global validation pipe, and CORS, and finally to start the application.

// main.ts

import { env } from '~configs/env.config';
import { Bootstrap } from '~core/bootstrap';

async function bootstrap() {
  const bootstrap = new Bootstrap();
  await bootstrap.initApp();
  bootstrap.enableCors();
  bootstrap.setupMiddleware();
  bootstrap.setupGlobalPipe();
  bootstrap.setupSwagger();
  await bootstrap.startApp();
}

  .then(() => console.log(`The application starts successfully at port ${env.PORT}`))
  .catch((error) => console.error(error));

The bootstrap function enabled CORS, registered middleware to the application, set up Swagger documentation, and used a global pipe to validate payloads.

I have laid the groundwork; the next step is to add an endpoint that receives a payload for generating replies with prompt chaining.

Define Feedback DTO

// feedback.dto.ts

import { IsNotEmpty, IsString } from 'class-validator';

export class FeedbackDto {
  @IsNotEmpty()
  @IsString()
  prompt: string;
}

FeedbackDto accepts a prompt that is the customer feedback.
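Conceptually, the global validation pipe plus the DTO decorators guarantee the controller never sees a missing, empty, or non-string prompt. The sketch below is my own plain-TypeScript stand-in for what `@IsString()` and `@IsNotEmpty()` enforce; in the app, class-validator does this work.

```typescript
// Conceptual stand-in for the checks class-validator performs on FeedbackDto.
function validateFeedbackDto(body: unknown): { prompt: string } {
  const prompt = (body as { prompt?: unknown })?.prompt;
  if (typeof prompt !== 'string') throw new Error('prompt must be a string');
  if (prompt.trim().length === 0) throw new Error('prompt should not be empty');
  return { prompt };
}
```

A request such as `{ "prompt": "Great service!" }` passes, while `{}` or `{ "prompt": "" }` is rejected with a 400 response before reaching the service layer.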

Construct Gemini Model

// gemini.constant.ts

export const GEMINI_CHAT_MODEL = 'GEMINI_CHAT_MODEL';

// gemini-chat-model.provider.ts

import { HarmBlockThreshold, HarmCategory } from '@google/generative-ai';
import { ChatGoogleGenerativeAI } from '@langchain/google-genai';
import { Provider } from '@nestjs/common';
import { env } from '~configs/env.config';
import { GEMINI_CHAT_MODEL } from './gemini.constant';

export const GeminiChatModelProvider: Provider<ChatGoogleGenerativeAI> = {
  provide: GEMINI_CHAT_MODEL,
  useFactory: () =>
    new ChatGoogleGenerativeAI({
      apiKey: env.GEMINI.API_KEY,
      model: env.GEMINI.MODEL_NAME,
      safetySettings: [
        {
          category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
          threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        },
        {
          category: HarmCategory.HARM_CATEGORY_HARASSMENT,
          threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        },
        {
          category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
          threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        },
        {
          category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
          threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        },
      ],
      temperature: 0.5,
      topK: 10,
      topP: 0.5,
      maxOutputTokens: 2048,
    }),
};

GeminiChatModelProvider provides a Gemini chat model that writes a short reply in the same language as the feedback.

Implement Reply Service

// customer-feedback.type.ts

export type CustomerFeedback = {
  feedback: string;
};

// advisory-feedback-prompt-chaining.service.ts

// Omit the import statements 

@Injectable()
export class AdvisoryFeedbackPromptChainingService {
  private readonly logger = new Logger(;

  constructor(@Inject(GEMINI_CHAT_MODEL) private model: ChatGoogleGenerativeAI) {}

  private createFindLanguageChain() {
    const languageTemplate = `What is the language of this feedback?
    When the feedback is written in Traditional Chinese, return Traditional Chinese. When the feedback is written in
    Simplified Chinese, return Simplified Chinese.
    Please give me the language name, and nothing else. Delete the trailing newline character
    Feedback: {feedback}`;
    const languagePrompt = PromptTemplate.fromTemplate<CustomerFeedback>(languageTemplate);

    return languagePrompt.pipe(this.model).pipe(new StringOutputParser());
  }

  private createTopicChain() {
    const topicTemplate = `What is the topic of this feedback?
    Just the topic and explanation is not needed. Delete the trailing newline character
    Feedback: {feedback}`;
    const topicPrompt = PromptTemplate.fromTemplate<CustomerFeedback>(topicTemplate);

    return topicPrompt.pipe(this.model).pipe(new StringOutputParser());
  }

  private createSentimentChain() {
    const sentimentTemplate = `What is the sentiment of this feedback? No explanation is needed.
    When the sentiment is positive, return 'POSITIVE', is neutral, return 'NEUTRAL', is negative, return 'NEGATIVE'.
    Feedback: {feedback}`;
    const sentimentPrompt = PromptTemplate.fromTemplate<CustomerFeedback>(sentimentTemplate);

    return sentimentPrompt.pipe(this.model).pipe(new StringOutputParser());
  }

  async generateReply(feedback: string): Promise<string> {
    try {
      const chainMap = RunnableMap.from<CustomerFeedback>({
        language: this.createFindLanguageChain(),
        sentiment: this.createSentimentChain(),
        topic: this.createTopicChain(),
        feedback: ({ feedback }) => feedback,
      });

      const replyPrompt =
        PromptTemplate.fromTemplate(`The customer wrote a {sentiment} feedback about {topic} in {language}. Feedback: {feedback}
        Please give a short reply in the same language.`);

      const combinedChain = RunnableSequence.from([chainMap, replyPrompt, this.model, new StringOutputParser()]);

      const response = await combinedChain.invoke({ feedback });
      this.logger.log(response);

      return response;
    } catch (ex) {
      this.logger.error(ex);
      throw ex;
    }
  }
}
AdvisoryFeedbackPromptChainingService injects a chat model in the constructor.

  • model – A chat model to carry out a multi-turn conversation to generate a reply.
  • createFindLanguageChain – a chain to identify the language of the feedback.
  • createSentimentChain – a chain to determine the sentiment (POSITIVE, NEUTRAL, NEGATIVE) of the feedback.
  • createTopicChain – a chain to determine the topics of the feedback.
  • generateReply – this method executes multiple chains in parallel, and their outputs become the inputs of the replyPrompt. Then, the combinedChain invokes the replyPrompt to generate a reply in the same language based on the sentiment and topic.

The process for generating replies ends by producing the text output from generateReply. The method asks questions concurrently and writes a descriptive prompt for the LLM to draft a polite reply that addresses the customer's needs.

// advisory-feedback.service.ts

// Omit the import statements to save space

@Injectable()
export class AdvisoryFeedbackService {
  constructor(private promptChainingService: AdvisoryFeedbackPromptChainingService) {}

  generateReply(prompt: string): Promise<string> {
    return this.promptChainingService.generateReply(prompt);
  }
}

AdvisoryFeedbackService injects AdvisoryFeedbackPromptChainingService and delegates to it to ask the chat model to generate a reply.

Implement Advisory Feedback Controller

// advisory-feedback.controller.ts

// Omit the import statements to save space

@Controller('esg-advisory-feedback')
export class AdvisoryFeedbackController {
  constructor(private service: AdvisoryFeedbackService) {}

  @Post()
  generateReply(@Body() dto: FeedbackDto): Promise<string> {
    return this.service.generateReply(dto.prompt);
  }
}

The AdvisoryFeedbackController injects AdvisoryFeedbackService, which uses Langchain and the Gemini 1.5 Pro model. The endpoint invokes the service method to generate a reply from the prompt.

  • /esg-advisory-feedback – generate a reply from a prompt

Module Registration

The AdvisoryFeedbackModule provides AdvisoryFeedbackPromptChainingService, AdvisoryFeedbackService and GeminiChatModelProvider. The module has one controller that is AdvisoryFeedbackController.

// advisory-feedback.module.ts

// Omit the import statements for brevity

@Module({
  controllers: [AdvisoryFeedbackController],
  providers: [GeminiChatModelProvider, AdvisoryFeedbackService, AdvisoryFeedbackPromptChainingService],
})
export class AdvisoryFeedbackModule {}

Import AdvisoryFeedbackModule into AppModule.

// app.module.ts

@Module({
  imports: [throttlerConfig, AdvisoryFeedbackModule],
  controllers: [AppController],
  providers: [
    {
      provide: APP_GUARD,
      useClass: ThrottlerGuard,
    },
  ],
})
export class AppModule {}

Test the endpoints

I can test the endpoints with cURL, Postman or Swagger documentation after launching the application.

npm run start:dev

The URL of the Swagger documentation is http://localhost:3002/api.


curl --location 'http://localhost:3002/esg-advisory-feedback' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "Looking ahead, the needs of our customers will increasingly be defined by sustainable choices. ESG reporting through diginex has brought us uniformity, transparency and direction. It provides us with a framework to be able to demonstrate to all stakeholders - customers, employees, and investors - what we are doing and to be open and transparent."
}'

Dockerize the application

// .dockerignore

.git
.gitignore
node_modules
dist

Create a .dockerignore file for Docker to ignore some files and directories.

// Dockerfile

# Use an official Node.js runtime as the base image
FROM node:20-alpine

# Set the working directory in the container
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install the dependencies
RUN npm install

# Copy the rest of the application code to the working directory
COPY . .

# Expose a port (if your application listens on a specific port)
EXPOSE 3002

# Define the command to run your application
CMD [ "npm", "run", "start:dev"]

I added the Dockerfile that installs the dependencies, copies the application code, and starts the NestJS application at port 3002.

// docker-compose.yaml

version: '3.8'

services:
  # The service name is illustrative; any name works here.
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - PORT=${PORT}
    ports:
      - "${PORT}:${PORT}"
    networks:
      - ai
    restart: unless-stopped

networks:
  ai:

I added the docker-compose.yaml in the current folder, which was responsible for creating the NestJS application container.

Launch the Docker application

docker-compose up

Navigate to http://localhost:3002/api to read and execute the API.

This concludes my blog post about using Langchain multiple chains and the Gemini 1.5 Pro model to generate replies regardless of the written language. Generating replies with multiple chains reduces the effort a writer needs to compose a polite reply to any customer. I have only scratched the surface of Langchain and Gemini; Langchain integrates with many LLMs to create chatbots, RAG applications, and text embeddings. I hope you like the content and continue to follow my learning experience in Angular, NestJS, Generative AI, and other technologies.


  1. GitHub Repo:
  2. Multiple Chains Cookbook:
  3. Langchain Runnable Maps:
  4. Langchain Multiple Chains Simply Explained: