
Commit the LangGraph backend service code for investment-attraction decision analysis

feix0518 committed 3 days ago
parent commit 83de51790f
27 files changed, 2455 additions and 0 deletions
  1. xinkeaboard-gemini-langgraph_prompt/.gitignore (+194 -0)
  2. xinkeaboard-gemini-langgraph_prompt/Dockerfile (+63 -0)
  3. xinkeaboard-gemini-langgraph_prompt/LICENSE (+201 -0)
  4. xinkeaboard-gemini-langgraph_prompt/Makefile (+16 -0)
  5. xinkeaboard-gemini-langgraph_prompt/README.md (+113 -0)
  6. xinkeaboard-gemini-langgraph_prompt/api_doc.ipynb (+289 -0)
  7. xinkeaboard-gemini-langgraph_prompt/backend/.env.example (+1 -0)
  8. xinkeaboard-gemini-langgraph_prompt/backend/.gitignore (+163 -0)
  9. xinkeaboard-gemini-langgraph_prompt/backend/LICENSE (+21 -0)
  10. xinkeaboard-gemini-langgraph_prompt/backend/Makefile (+64 -0)
  11. xinkeaboard-gemini-langgraph_prompt/backend/examples/cli_research.py (+43 -0)
  12. xinkeaboard-gemini-langgraph_prompt/backend/langgraph.json (+10 -0)
  13. xinkeaboard-gemini-langgraph_prompt/backend/pyproject.toml (+59 -0)
  14. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/__init__.py (+3 -0)
  15. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/app.py (+71 -0)
  16. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/configuration.py (+60 -0)
  17. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/graph.py (+324 -0)
  18. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/prompts.py (+139 -0)
  19. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/run_agent_cli.py (+53 -0)
  20. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/state.py (+48 -0)
  21. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/test.py (+6 -0)
  22. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/tools_and_schemas.py (+23 -0)
  23. xinkeaboard-gemini-langgraph_prompt/backend/src/agent/utils.py (+166 -0)
  24. xinkeaboard-gemini-langgraph_prompt/backend/src/minesweeper.py (+182 -0)
  25. xinkeaboard-gemini-langgraph_prompt/backend/test-agent.ipynb (+38 -0)
  26. xinkeaboard-gemini-langgraph_prompt/docker-compose.yml (+44 -0)
  27. xinkeaboard-gemini-langgraph_prompt/example.md (+61 -0)

+ 194 - 0
xinkeaboard-gemini-langgraph_prompt/.gitignore

@@ -0,0 +1,194 @@
+# Logs
+logs
+*.log
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+pnpm-debug.log*
+lerna-debug.log*
+
+# OS generated files
+.DS_Store
+.DS_Store?
+._*
+.Spotlight-V100
+.Trashes
+ehthumbs.db
+Thumbs.db
+
+# IDE files
+.idea/
+.vscode/
+*.suo
+*.ntvs*
+*.njsproj
+*.sln
+*.sw?
+
+# Optional backend venv (if created in root)
+#.venv/ 
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+uv.lock
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+#  Usually these files are written by a python script from a template
+#  before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+#   For a library or package, you might want to ignore these files since the code is
+#   intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+#   However, in case of collaboration, if having platform-specific dependencies or dependencies
+#   having no cross-platform support, pipenv may install dependencies that don't work, or not
+#   install all needed dependencies.
+#Pipfile.lock
+
+# poetry
+#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+#   This is especially recommended for binary packages to ensure reproducibility, and is more
+#   commonly ignored for libraries.
+#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+#   in version control.
+#   https://pdm.fming.dev/latest/usage/project/#working-with-version-control
+.pdm.toml
+.pdm-python
+.pdm-build/
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+#  and can be added to the global gitignore or merged into this file.  For a more nuclear
+#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/
+
+backend/.langgraph_api

+ 63 - 0
xinkeaboard-gemini-langgraph_prompt/Dockerfile

@@ -0,0 +1,63 @@
+# Stage 1: Build React Frontend
+FROM node:20-alpine AS frontend-builder
+
+# Set working directory for frontend
+WORKDIR /app/frontend
+
+# Copy frontend package files and install dependencies
+COPY frontend/package.json ./
+COPY frontend/package-lock.json ./
+# If you use yarn or pnpm, adjust accordingly (e.g., copy yarn.lock or pnpm-lock.yaml and use yarn install or pnpm install)
+RUN npm install
+
+# Copy the rest of the frontend source code
+COPY frontend/ ./
+
+# Build the frontend
+RUN npm run build
+
+# Stage 2: Python Backend
+FROM docker.io/langchain/langgraph-api:3.11
+
+# -- Install UV --
+# First install curl, then install UV using the standalone installer
+RUN apt-get update && apt-get install -y curl && \
+    curl -LsSf https://astral.sh/uv/install.sh | sh && \
+    apt-get clean && rm -rf /var/lib/apt/lists/*
+ENV PATH="/root/.local/bin:$PATH"
+# -- End of UV installation --
+
+# -- Copy built frontend from builder stage --
+# The app.py expects the frontend build to be at ../frontend/dist relative to its own location.
+# If app.py is at /deps/backend/src/agent/app.py, then ../frontend/dist resolves to /deps/frontend/dist.
+COPY --from=frontend-builder /app/frontend/dist /deps/frontend/dist
+# -- End of copying built frontend --
+
+# -- Adding local package . --
+ADD backend/ /deps/backend
+# -- End of local package . --
+
+# -- Installing all local dependencies using UV --
+# First, we need to ensure pip is available for UV to use
+RUN uv pip install --system pip setuptools wheel
+# Install dependencies with UV, respecting constraints
+RUN cd /deps/backend && \
+    PYTHONDONTWRITEBYTECODE=1 UV_SYSTEM_PYTHON=1 uv pip install --system -c /api/constraints.txt -e .
+# -- End of local dependencies install --
+ENV LANGGRAPH_HTTP='{"app": "/deps/backend/src/agent/app.py:app"}'
+ENV LANGSERVE_GRAPHS='{"agent": "/deps/backend/src/agent/graph.py:graph"}'
+
+# -- Ensure user deps didn't inadvertently overwrite langgraph-api
+# Create all required directories that the langgraph-api package expects
+RUN mkdir -p /api/langgraph_api /api/langgraph_runtime /api/langgraph_license /api/langgraph_storage && \
+    touch /api/langgraph_api/__init__.py /api/langgraph_runtime/__init__.py /api/langgraph_license/__init__.py /api/langgraph_storage/__init__.py
+# Use pip for this specific package as it has poetry-based build requirements
+RUN PYTHONDONTWRITEBYTECODE=1 pip install --no-cache-dir --no-deps -e /api
+# -- End of ensuring user deps didn't inadvertently overwrite langgraph-api --
+# -- Removing pip from the final image (but keeping UV) --
+RUN uv pip uninstall --system pip setuptools wheel && \
+    rm -rf /usr/local/lib/python*/site-packages/pip* /usr/local/lib/python*/site-packages/setuptools* /usr/local/lib/python*/site-packages/wheel* && \
+    find /usr/local/bin -name "pip*" -delete
+# -- End of pip removal --
+
+WORKDIR /deps/backend

+ 201 - 0
xinkeaboard-gemini-langgraph_prompt/LICENSE

@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

+ 16 - 0
xinkeaboard-gemini-langgraph_prompt/Makefile

@@ -0,0 +1,16 @@
+.PHONY: help dev-frontend dev-backend dev
+
+help:
+	@echo "Available commands:"
+	@echo "  make dev-frontend    - Starts the frontend development server (Vite)"
+	@echo "  make dev-backend     - Starts the backend development server (langgraph dev)"
+	@echo "  make dev             - Starts both frontend and backend development servers"
+
+dev-frontend:
+	@echo "Starting frontend development server..."
+	@cd frontend && npm run dev
+
+dev-backend:
+	@echo "Starting backend development server..."
+	@cd backend && langgraph dev
+
+# Run frontend and backend concurrently
+dev:
+	@echo "Starting both frontend and backend development servers..."
+	@make dev-frontend & make dev-backend

+ 113 - 0
xinkeaboard-gemini-langgraph_prompt/README.md

@@ -0,0 +1,113 @@
+# Gemini Fullstack LangGraph Quickstart
+
+This project demonstrates a fullstack application using a React frontend and a LangGraph-powered backend agent. The agent is designed to perform comprehensive research on a user's query by dynamically generating search terms, querying the web using Google Search, reflecting on the results to identify knowledge gaps, and iteratively refining its search until it can provide a well-supported answer with citations. This application serves as an example of building research-augmented conversational AI using LangGraph and Google's Gemini models.
+
+<img src="./app.png" title="Gemini Fullstack LangGraph" alt="Gemini Fullstack LangGraph" width="90%">
+
+## Features
+
+- 💬 Fullstack application with a React frontend and LangGraph backend.
+- 🧠 Powered by a LangGraph agent for advanced research and conversational AI.
+- 🔍 Dynamic search query generation using Google Gemini models.
+- 🌐 Integrated web research via Google Search API.
+- 🤔 Reflective reasoning to identify knowledge gaps and refine searches.
+- 📄 Generates answers with citations from gathered sources.
+- 🔄 Hot-reloading for both frontend and backend during development.
+
+## Project Structure
+
+The project is divided into two main directories:
+
+-   `frontend/`: Contains the React application built with Vite.
+-   `backend/`: Contains the LangGraph/FastAPI application, including the research agent logic.
+
+## Getting Started: Development and Local Testing
+
+Follow these steps to get the application running locally for development and testing.
+
+**1. Prerequisites:**
+
+-   Node.js and npm (or yarn/pnpm)
+-   Python 3.11+
+-   **`GEMINI_API_KEY`**: The backend agent requires a Google Gemini API key.
+    1.  Navigate to the `backend/` directory.
+    2.  Create a file named `.env` by copying the `backend/.env.example` file.
+    3.  Open the `.env` file and add your Gemini API key: `GEMINI_API_KEY="YOUR_ACTUAL_API_KEY"`
+
+**2. Install Dependencies:**
+
+**Backend:**
+
+```bash
+cd backend
+pip install .
+```
+
+**3. Run Development Servers:**
+
+**Backend & Frontend:**
+
+```bash
+make dev
+```
+This will run the backend and frontend development servers. Open your browser and navigate to the frontend development server URL (e.g., `http://localhost:5173/app`).
+
+_Alternatively, you can run the backend and frontend development servers separately. For the backend, open a terminal in the `backend/` directory and run `langgraph dev`. The backend API will be available at `http://127.0.0.1:2024`. It will also open a browser window to the LangGraph UI. For the frontend, open a terminal in the `frontend/` directory and run `npm run dev`. The frontend will be available at `http://localhost:5173`._
+
+## How the Backend Agent Works (High-Level)
+
+The core of the backend is a LangGraph agent defined in `backend/src/agent/graph.py`. It follows these steps:
+
+<img src="./agent.png" title="Agent Flow" alt="Agent Flow" width="50%">
+
+1.  **Generate Initial Queries:** Based on your input, it generates a set of initial search queries using a Gemini model.
+2.  **Web Research:** For each query, it uses the Gemini model with the Google Search API to find relevant web pages.
+3.  **Reflection & Knowledge Gap Analysis:** The agent analyzes the search results to determine if the information is sufficient or if there are knowledge gaps. It uses a Gemini model for this reflection process.
+4.  **Iterative Refinement:** If gaps are found or the information is insufficient, it generates follow-up queries and repeats the web research and reflection steps (up to a configured maximum number of loops).
+5.  **Finalize Answer:** Once the research is deemed sufficient, the agent synthesizes the gathered information into a coherent answer, including citations from the web sources, using a Gemini model.
+
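A minimal sketch of driving this loop programmatically (hedged: the question is illustrative only; `graph` is the compiled agent exported by `backend/src/agent/graph.py`, and `stream_mode="updates"` is standard LangGraph API):

```python
# Sketch only: stream the agent and watch the loop's nodes fire in order
# (generate_query -> web_research -> reflection -> ... -> finalize_answer).
from langchain_core.messages import HumanMessage
from agent.graph import graph

state = {"messages": [HumanMessage(content="What are the latest trends in renewable energy?")]}
for update in graph.stream(state, stream_mode="updates"):
    for node, payload in update.items():
        print(f"finished node: {node}")
```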
+## CLI Example
+
+For quick one-off questions you can execute the agent from the command line. The
+script `backend/examples/cli_research.py` runs the LangGraph agent and prints the
+final answer:
+
+```bash
+cd backend
+python examples/cli_research.py "What are the latest trends in renewable energy?"
+```
+
+
+## Deployment
+
+In production, the backend server serves the optimized static frontend build. LangGraph requires a Redis instance and a Postgres database. Redis is used as a pub-sub broker to enable streaming real time output from background runs. Postgres is used to store assistants, threads, runs, persist thread state and long term memory, and to manage the state of the background task queue with 'exactly once' semantics. For more details on how to deploy the backend server, take a look at the [LangGraph Documentation](https://langchain-ai.github.io/langgraph/concepts/deployment_options/). Below is an example of how to build a Docker image that includes the optimized frontend build and the backend server and run it via `docker-compose`.
+
+_Note: For the docker-compose.yml example you need a LangSmith API key; you can get one from [LangSmith](https://smith.langchain.com/settings)._
+
+_Note: If you are not running the docker-compose.yml example or exposing the backend server to the public internet, you should update the `apiUrl` in the `frontend/src/App.tsx` file to your host. Currently the `apiUrl` is set to `http://localhost:8123` for docker-compose or `http://localhost:2024` for development._
+
+**1. Build the Docker Image:**
+
+   Run the following command from the **project root directory**:
+   ```bash
+   docker build -t gemini-fullstack-langgraph -f Dockerfile .
+   ```
+**2. Run the Production Server:**
+
+   ```bash
+   GEMINI_API_KEY=<your_gemini_api_key> LANGSMITH_API_KEY=<your_langsmith_api_key> docker-compose up
+   ```
+
+Open your browser and navigate to `http://localhost:8123/app/` to see the application. The API will be available at `http://localhost:8123`.
+
+## Technologies Used
+
+- [React](https://reactjs.org/) (with [Vite](https://vitejs.dev/)) - For the frontend user interface.
+- [Tailwind CSS](https://tailwindcss.com/) - For styling.
+- [Shadcn UI](https://ui.shadcn.com/) - For components.
+- [LangGraph](https://github.com/langchain-ai/langgraph) - For building the backend research agent.
+- [Google Gemini](https://ai.google.dev/models/gemini) - LLM for query generation, reflection, and answer synthesis.
+
+## License
+
+This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details. 

File diff suppressed because it is too large
+ 289 - 0
xinkeaboard-gemini-langgraph_prompt/api_doc.ipynb


+ 1 - 0
xinkeaboard-gemini-langgraph_prompt/backend/.env.example

@@ -0,0 +1 @@
+# GEMINI_API_KEY=

+ 163 - 0
xinkeaboard-gemini-langgraph_prompt/backend/.gitignore

@@ -0,0 +1,163 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+uv.lock
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+#  Usually these files are written by a python script from a template
+#  before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+#   For a library or package, you might want to ignore these files since the code is
+#   intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+#   However, in case of collaboration, if having platform-specific dependencies or dependencies
+#   having no cross-platform support, pipenv may install dependencies that don't work, or not
+#   install all needed dependencies.
+#Pipfile.lock
+
+# poetry
+#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+#   This is especially recommended for binary packages to ensure reproducibility, and is more
+#   commonly ignored for libraries.
+#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+#   in version control.
+#   https://pdm.fming.dev/latest/usage/project/#working-with-version-control
+.pdm.toml
+.pdm-python
+.pdm-build/
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+#  and can be added to the global gitignore or merged into this file.  For a more nuclear
+#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/

+ 21 - 0
xinkeaboard-gemini-langgraph_prompt/backend/LICENSE

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 Philipp Schmid
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

+ 64 - 0
xinkeaboard-gemini-langgraph_prompt/backend/Makefile

@@ -0,0 +1,64 @@
+.PHONY: all format lint test tests test_watch integration_tests docker_tests help extended_tests
+
+# Default target executed when no arguments are given to make.
+all: help
+
+# Define a variable for the test file path.
+TEST_FILE ?= tests/unit_tests/
+
+test:
+	uv run --with-editable . pytest $(TEST_FILE)
+
+test_watch:
+	uv run --with-editable . ptw --snapshot-update --now . -- -vv tests/unit_tests
+
+test_profile:
+	uv run --with-editable . pytest -vv tests/unit_tests/ --profile-svg
+
+extended_tests:
+	uv run --with-editable . pytest --only-extended $(TEST_FILE)
+
+
+######################
+# LINTING AND FORMATTING
+######################
+
+# Define a variable for Python and notebook files.
+PYTHON_FILES=src/
+MYPY_CACHE=.mypy_cache
+lint format: PYTHON_FILES=.
+lint_diff format_diff: PYTHON_FILES=$(shell git diff --name-only --diff-filter=d main | grep -E '\.py$$|\.ipynb$$')
+lint_package: PYTHON_FILES=src
+lint_tests: PYTHON_FILES=tests
+lint_tests: MYPY_CACHE=.mypy_cache_test
+
+lint lint_diff lint_package lint_tests:
+	uv run ruff check .
+	[ "$(PYTHON_FILES)" = "" ] || uv run ruff format $(PYTHON_FILES) --diff
+	[ "$(PYTHON_FILES)" = "" ] || uv run ruff check --select I $(PYTHON_FILES)
+	[ "$(PYTHON_FILES)" = "" ] || uv run mypy --strict $(PYTHON_FILES)
+	[ "$(PYTHON_FILES)" = "" ] || mkdir -p $(MYPY_CACHE) && uv run mypy --strict $(PYTHON_FILES) --cache-dir $(MYPY_CACHE)
+
+format format_diff:
+	uv run ruff format $(PYTHON_FILES)
+	uv run ruff check --select I --fix $(PYTHON_FILES)
+
+spell_check:
+	codespell --toml pyproject.toml
+
+spell_fix:
+	codespell --toml pyproject.toml -w
+
+######################
+# HELP
+######################
+
+help:
+	@echo '----'
+	@echo 'format                       - run code formatters'
+	@echo 'lint                         - run linters'
+	@echo 'test                         - run unit tests'
+	@echo 'tests                        - run unit tests'
+	@echo 'test TEST_FILE=<test_file>   - run all tests in file'
+	@echo 'test_watch                   - run unit tests in watch mode'
+

+ 43 - 0
xinkeaboard-gemini-langgraph_prompt/backend/examples/cli_research.py

@@ -0,0 +1,43 @@
+import argparse
+from langchain_core.messages import HumanMessage
+from agent.graph import graph
+
+
+def main() -> None:
+    """Run the research agent from the command line."""
+    parser = argparse.ArgumentParser(description="Run the LangGraph research agent")
+    parser.add_argument("question", help="Research question")
+    parser.add_argument(
+        "--initial-queries",
+        type=int,
+        default=3,
+        help="Number of initial search queries",
+    )
+    parser.add_argument(
+        "--max-loops",
+        type=int,
+        default=2,
+        help="Maximum number of research loops",
+    )
+    parser.add_argument(
+        "--reasoning-model",
+        default="gemini-2.5-pro-preview-05-06",
+        help="Model for the final answer",
+    )
+    args = parser.parse_args()
+
+    state = {
+        "messages": [HumanMessage(content=args.question)],
+        "initial_search_query_count": args.initial_queries,
+        "max_research_loops": args.max_loops,
+        "reasoning_model": args.reasoning_model,
+    }
+
+    result = graph.invoke(state)
+    messages = result.get("messages", [])
+    if messages:
+        print(messages[-1].content)
+
+
+if __name__ == "__main__":
+    main()

+ 10 - 0
xinkeaboard-gemini-langgraph_prompt/backend/langgraph.json

@@ -0,0 +1,10 @@
+{
+  "dependencies": ["."],
+  "graphs": {
+    "agent": "./src/agent/graph.py:graph"
+  },
+  "http": {
+    "app": "./src/agent/app.py:app"
+  },
+  "env": ".env"
+}

+ 59 - 0
xinkeaboard-gemini-langgraph_prompt/backend/pyproject.toml

@@ -0,0 +1,59 @@
+[project]
+name = "agent"
+version = "0.0.1"
+description = "Backend for the LangGraph agent"
+authors = [
+    { name = "Philipp Schmid", email = "schmidphilipp1995@gmail.com" },
+]
+readme = "README.md"
+license = { text = "MIT" }
+requires-python = ">=3.11,<4.0"
+dependencies = [
+    "langgraph>=0.2.6",
+    "langchain>=0.3.19",
+    "langchain-google-genai",
+    "python-dotenv>=1.0.1",
+    "langgraph-sdk>=0.1.57",
+    "langgraph-cli",
+    "langgraph-api",
+    "fastapi",
+    "google-genai",
+]
+
+
+[project.optional-dependencies]
+dev = ["mypy>=1.11.1", "ruff>=0.6.1"]
+
+[build-system]
+requires = ["setuptools>=73.0.0", "wheel"]
+build-backend = "setuptools.build_meta"
+
+[tool.ruff]
+lint.select = [
+    "E",    # pycodestyle
+    "F",    # pyflakes
+    "I",    # isort
+    "D",    # pydocstyle
+    "D401", # First line should be in imperative mood
+    "T201",
+    "UP",
+]
+lint.ignore = [
+    "UP006",
+    "UP007",
+    # We actually do want to import from typing_extensions
+    "UP035",
+    # Relax the convention by _not_ requiring documentation for every function parameter.
+    "D417",
+    "E501",
+]
+[tool.ruff.lint.per-file-ignores]
+"tests/*" = ["D", "UP"]
+[tool.ruff.lint.pydocstyle]
+convention = "google"
+
+[dependency-groups]
+dev = [
+    "langgraph-cli[inmem]>=0.1.71",
+    "pytest>=8.3.5",
+]

+ 3 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/__init__.py

@@ -0,0 +1,3 @@
+from agent.graph import graph
+
+__all__ = ["graph"]

+ 71 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/app.py

@@ -0,0 +1,71 @@
+# mypy: disable-error-code="no-untyped-def,misc"
+import pathlib
+from fastapi import FastAPI, Response
+from fastapi.staticfiles import StaticFiles
+from pydantic import BaseModel
+from typing import Optional, List, Dict, Any
+import asyncio
+
+# Import the graph from the agent
+from agent.graph import graph
+# Define the FastAPI app
+app = FastAPI()
+
+class ResearchRequest(BaseModel):
+    """Request model for the research endpoint."""
+    question: str
+    initial_search_query_count: Optional[int] = 3
+    max_research_loops: Optional[int] = 2
+    reasoning_model: Optional[str] = "gemini-2.0-flash"
+
+
+class ResearchResponse(BaseModel):
+    """Response model for the research endpoint."""
+    answer: str
+    sources: List[Dict[str, Any]]
+
+
+@app.post("/api/research", response_model=ResearchResponse)
+async def research(request: ResearchRequest):
+    """Endpoint to perform research using the LangGraph agent.
+
+    Args:
+        request: ResearchRequest containing the question and optional parameters.
+
+    Returns:
+        ResearchResponse with the answer and sources.
+   """
+    # Prepare the input for the agent
+
+
+    input_data = {
+        "messages": [("user", request.question)]
+    }
+
+    # Prepare configuration
+    config = {
+        "configurable": {}
+    }
+
+    # Add optional parameters to configuration if provided
+    if request.initial_search_query_count is not None:
+        config["configurable"]["number_of_initial_queries"] = request.initial_search_query_count
+
+    if request.max_research_loops is not None:
+        config["configurable"]["max_research_loops"] = request.max_research_loops
+
+    if request.reasoning_model is not None:
+        config["configurable"]["answer_model"] = request.reasoning_model
+
+    # Run the synchronous graph off the event loop so the endpoint stays async
+    result = await asyncio.to_thread(graph.invoke, input_data, config)
+
+    # Extract the answer and sources
+    answer = result["messages"][-1].content if result["messages"] else ""
+    sources = result.get("sources_gathered", [])
+
+    return ResearchResponse(answer=answer, sources=sources)
+

+ 60 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/configuration.py

@@ -0,0 +1,60 @@
+import os
+from pydantic import BaseModel, Field
+from typing import Any, Optional
+
+from langchain_core.runnables import RunnableConfig
+
+
+class Configuration(BaseModel):
+    """The configuration for the agent."""
+
+    query_generator_model: str = Field(
+        default="gemini-2.0-flash",
+        metadata={
+            "description": "The name of the language model to use for the agent's query generation."
+        },
+    )
+
+    reflection_model: str = Field(
+        default="gemini-2.5-flash",
+        metadata={
+            "description": "The name of the language model to use for the agent's reflection."
+        },
+    )
+
+    answer_model: str = Field(
+        default="gemini-2.5-pro",
+        metadata={
+            "description": "The name of the language model to use for the agent's answer."
+        },
+    )
+
+    number_of_initial_queries: int = Field(
+        default=3,
+        metadata={"description": "The number of initial search queries to generate."},
+    )
+
+    max_research_loops: int = Field(
+        default=2,
+        metadata={"description": "The maximum number of research loops to perform."},
+    )
+
+    @classmethod
+    def from_runnable_config(
+        cls, config: Optional[RunnableConfig] = None
+    ) -> "Configuration":
+        """Create a Configuration instance from a RunnableConfig."""
+        configurable = (
+            config["configurable"] if config and "configurable" in config else {}
+        )
+
+        # Get raw values from environment or config
+        raw_values: dict[str, Any] = {
+            name: os.environ.get(name.upper(), configurable.get(name))
+            for name in cls.model_fields.keys()
+        }
+
+        # Filter out None values
+        values = {k: v for k, v in raw_values.items() if v is not None}
+
+        return cls(**values)
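To make the precedence in `from_runnable_config` concrete — environment variables override the per-invocation `configurable` dict, which overrides the field defaults — a small sketch (the values are illustrative only):

```python
# Sketch only: env var (NAME.upper()) > config["configurable"][name] > field default.
import os
from agent.configuration import Configuration

os.environ["QUERY_GENERATOR_MODEL"] = "gemini-2.0-flash-lite"  # assumed model name
config = {"configurable": {"number_of_initial_queries": 5}}

cfg = Configuration.from_runnable_config(config)
print(cfg.query_generator_model)      # gemini-2.0-flash-lite (from the environment)
print(cfg.number_of_initial_queries)  # 5 (from the RunnableConfig)
print(cfg.max_research_loops)         # 2 (field default)
```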

+ 324 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/graph.py

@@ -0,0 +1,324 @@
+import os
+
+from agent.tools_and_schemas import SearchQueryList, Reflection
+from dotenv import load_dotenv
+from langchain_core.messages import AIMessage
+from langgraph.types import Send
+from langgraph.graph import StateGraph
+from langgraph.graph import START, END
+from langchain_core.runnables import RunnableConfig
+from google.genai import Client
+
+from agent.state import (
+    OverallState,
+    QueryGenerationState,
+    ReflectionState,
+    WebSearchState,
+)
+from agent.configuration import Configuration
+from agent.prompts import (
+    get_current_date,
+    query_writer_instructions,
+    web_searcher_instructions,
+    reflection_instructions,
+    answer_instructions,
+)
+from langchain_google_genai import ChatGoogleGenerativeAI
+from agent.utils import (
+    get_citations,
+    get_research_topic,
+    insert_citation_markers,
+    resolve_urls,
+)
+import logging
+
+load_dotenv()
+
+logging.basicConfig(
+    level=logging.INFO,  # log level
+    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",  # log format
+)
+
+logger = logging.getLogger(__name__)
+
+if os.getenv("GEMINI_API_KEY") is None:
+    raise ValueError("GEMINI_API_KEY is not set")
+
+# Used for Google Search API
+genai_client = Client(api_key=os.getenv("GEMINI_API_KEY"))
+
+
+# Nodes
+def generate_query(state: OverallState, config: RunnableConfig) -> QueryGenerationState:
+    """LangGraph node that generates search queries based on the User's question.
+
+    Uses Gemini 2.0 Flash to create optimized search queries for web research based on
+    the User's question.
+
+    Args:
+        state: Current graph state containing the User's question
+        config: Configuration for the runnable, including LLM provider settings
+
+    Returns:
+        Dictionary with state update, including search_query key containing the generated queries
+    """
+    configurable = Configuration.from_runnable_config(config)
+    logger.info("开始:generate_query")
+    logger.info("1:generate_query")
+    # check for custom initial search query count
+    if state.get("initial_search_query_count") is None:
+        state["initial_search_query_count"] = configurable.number_of_initial_queries
+    logger.info("2:generate_query")
+    # init Gemini 2.0 Flash
+    llm = ChatGoogleGenerativeAI(
+        model=configurable.query_generator_model,
+        temperature=1.0,
+        max_retries=2,
+        api_key=os.getenv("GEMINI_API_KEY"),
+    )
+    logger.info("3:generate_query")
+    structured_llm = llm.with_structured_output(SearchQueryList)
+    logger.info("4:generate_query")
+    # Format the prompt
+    current_date = get_current_date()
+    formatted_prompt = query_writer_instructions.format(
+        current_date=current_date,
+        research_topic=get_research_topic(state["messages"]),
+        number_queries=state["initial_search_query_count"],
+    )
+    # Generate the search queries
+    # print("formatted_prompt: ", formatted_prompt)
+    result = structured_llm.invoke(formatted_prompt)
+    logger.info("结束:generate_query")
+    return {"search_query": result.query}
+
+
+def continue_to_web_research(state: QueryGenerationState):
+    """LangGraph node that sends the search queries to the web research node.
+
+    This is used to spawn n number of web research nodes, one for each search query.
+    """
+    logger.info("start: continue_to_web_research")
+    return [
+        Send("web_research", {"search_query": search_query, "id": int(idx)})
+        for idx, search_query in enumerate(state["search_query"])
+    ]
+
+
+def web_research(state: WebSearchState, config: RunnableConfig) -> OverallState:
+    """LangGraph node that performs web research using the native Google Search API tool.
+
+    Executes a web search using the native Google Search API tool in combination with Gemini 2.0 Flash.
+
+    Args:
+        state: Current graph state containing the search query and research loop count
+        config: Configuration for the runnable, including search API settings
+
+    Returns:
+        Dictionary with state update, including sources_gathered, research_loop_count, and web_research_results
+    """
+    logger.info("开始:web_research")
+    # Configure
+    configurable = Configuration.from_runnable_config(config)
+    formatted_prompt = web_searcher_instructions.format(
+        current_date=get_current_date(),
+        research_topic=state["search_query"],
+    )
+
+    # Uses the google genai client as the langchain client doesn't return grounding metadata
+    response = genai_client.models.generate_content(
+        model=configurable.query_generator_model,
+        contents=formatted_prompt,
+        config={
+            "tools": [{"google_search": {}}],
+            "temperature": 0,
+        },
+    )
+    # chunks = response.candidates[0].grounding_metadata.grounding_chunks
+    # for chunk in chunks:
+    #     print(chunk["title"])
+    #     print(chunk["url"])
+    #     print(chunk["content"]) 
+    # resolve the urls to short urls for saving tokens and time
+    resolved_urls = resolve_urls(
+        response.candidates[0].grounding_metadata.grounding_chunks, state["id"]
+    )
+    # Gets the citations and adds them to the generated text
+    citations = get_citations(response, resolved_urls)
+    modified_text = insert_citation_markers(response.text, citations)
+    sources_gathered = [item for citation in citations for item in citation["segments"]]
+    logger.info("结束:web_research")
+    return {
+        "sources_gathered": sources_gathered,
+        "search_query": [state["search_query"]],
+        "web_research_result": [modified_text],
+    }
+
+
+def reflection(state: OverallState, config: RunnableConfig) -> ReflectionState:
+    """LangGraph node that identifies knowledge gaps and generates potential follow-up queries.
+
+    Analyzes the current summary to identify areas for further research and generates
+    potential follow-up queries. Uses structured output to extract
+    the follow-up query in JSON format.
+
+    Args:
+        state: Current graph state containing the running summary and research topic
+        config: Configuration for the runnable, including LLM provider settings
+
+    Returns:
+        Dictionary with state update, including search_query key containing the generated follow-up query
+    """
+    logger.info("start: reflection")
+    configurable = Configuration.from_runnable_config(config)
+    # Increment the research loop count and get the reasoning model
+    state["research_loop_count"] = state.get("research_loop_count", 0) + 1
+    reasoning_model = state.get("reasoning_model", configurable.reflection_model)
+
+    # Format the prompt
+    current_date = get_current_date()
+    formatted_prompt = reflection_instructions.format(
+        current_date=current_date,
+        research_topic=get_research_topic(state["messages"]),
+        summaries="\n\n---\n\n".join(state["web_research_result"]),
+    )
+    # init Reasoning Model
+    llm = ChatGoogleGenerativeAI(
+        model=reasoning_model,
+        temperature=1.0,
+        max_retries=2,
+        api_key=os.getenv("GEMINI_API_KEY"),
+    )
+    result = llm.with_structured_output(Reflection).invoke(formatted_prompt)
+    logger.info("结束:reflection")
+    return {
+        "is_sufficient": result.is_sufficient,
+        "knowledge_gap": result.knowledge_gap,
+        "follow_up_queries": result.follow_up_queries,
+        "research_loop_count": state["research_loop_count"],
+        "number_of_ran_queries": len(state["search_query"]),
+    }
+
+
+def evaluate_research(
+    state: ReflectionState,
+    config: RunnableConfig,
+) -> OverallState:
+    """LangGraph routing function that determines the next step in the research flow.
+
+    Controls the research loop by deciding whether to continue gathering information
+    or to finalize the summary based on the configured maximum number of research loops.
+
+    Args:
+        state: Current graph state containing the research loop count
+        config: Configuration for the runnable, including max_research_loops setting
+
+    Returns:
+        The string literal "finalize_answer", or a list of Send objects routing back to "web_research"
+    """
+    logger.info("start: evaluate_research")
+    configurable = Configuration.from_runnable_config(config)
+    max_research_loops = (
+        state.get("max_research_loops")
+        if state.get("max_research_loops") is not None
+        else configurable.max_research_loops
+    )
+    logger.info("结束:evaluate_research")
+    if state["is_sufficient"] or state["research_loop_count"] >= max_research_loops:
+        return "finalize_answer"
+    else:
+        return [
+            Send(
+                "web_research",
+                {
+                    "search_query": follow_up_query,
+                    "id": state["number_of_ran_queries"] + int(idx),
+                },
+            )
+            for idx, follow_up_query in enumerate(state["follow_up_queries"])
+        ]
+
+
+def finalize_answer(state: OverallState, config: RunnableConfig):
+    """LangGraph node that finalizes the research summary.
+
+    Prepares the final output by deduplicating and formatting sources, then
+    combining them with the running summary to create a well-structured
+    research report with proper citations.
+
+    Args:
+        state: Current graph state containing the running summary and sources gathered
+
+    Returns:
+        Dictionary with state update, including running_summary key containing the formatted final summary with sources
+    """
+    logger.info("start: finalize_answer")
+    configurable = Configuration.from_runnable_config(config)
+    reasoning_model = state.get("reasoning_model") or configurable.answer_model
+
+    # Format the prompt
+    current_date = get_current_date()
+    formatted_prompt = answer_instructions.format(
+        current_date=current_date,
+        research_topic=get_research_topic(state["messages"]),
+        summaries="\n---\n\n".join(state["web_research_result"]),
+    )
+
+    # init Reasoning Model, defaulting to the configured answer model (gemini-2.5-pro)
+    llm = ChatGoogleGenerativeAI(
+        model=reasoning_model,
+        temperature=0,
+        max_retries=2,
+        api_key=os.getenv("GEMINI_API_KEY"),
+    )
+    logger.info("开始:llm.invoke")
+    result = llm.invoke(formatted_prompt)
+    logger.info("结束:llm.invoke:{}",result)
+    # Replace the short urls with the original urls and add all used urls to the sources_gathered
+    unique_sources = []
+    for source in state["sources_gathered"]:
+        if source["short_url"] in result.content:
+            result.content = result.content.replace(
+                source["short_url"], source["value"]
+            )
+            unique_sources.append(source)
+    # Optionally save the result to a markdown file:
+    # with open(f"result_{get_research_topic(state['messages'])}.md", "w", encoding="utf-8") as f:
+    #     f.write(result.content)
+    # print(f"Result saved to {f.name}")
+    logger.info("结束:finalize_answer")
+    return {
+        "messages": [AIMessage(content=result.content)],
+        "sources_gathered": unique_sources,
+        #save the result to a markdown file
+
+    }
+
+
+# Create our Agent Graph
+builder = StateGraph(OverallState, config_schema=Configuration)
+
+# Define the nodes we will cycle between
+builder.add_node("generate_query", generate_query)
+builder.add_node("web_research", web_research)
+builder.add_node("reflection", reflection)
+builder.add_node("finalize_answer", finalize_answer)
+
+# Set the entrypoint as `generate_query`
+# This means that this node is the first one called
+builder.add_edge(START, "generate_query")
+# Add conditional edge to continue with search queries in a parallel branch
+builder.add_conditional_edges(
+    "generate_query", continue_to_web_research, ["web_research"]
+)
+# Reflect on the web research
+builder.add_edge("web_research", "reflection")
+# Evaluate the research
+builder.add_conditional_edges(
+    "reflection", evaluate_research, ["web_research", "finalize_answer"]
+)
+# Finalize the answer
+builder.add_edge("finalize_answer", END)
+
+graph = builder.compile(name="pro-search-agent")
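Since the wiring above can be hard to follow in prose, the compiled graph's topology can be printed directly — a small sketch using LangGraph's built-in Mermaid rendering:

```python
# Sketch only: render the compiled graph (START -> generate_query -> web_research
# -> reflection -> {web_research | finalize_answer} -> END) as a Mermaid diagram.
from agent.graph import graph

print(graph.get_graph().draw_mermaid())
```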

+ 139 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/prompts.py

@@ -0,0 +1,139 @@
+from datetime import datetime
+
+
+# Get current date in a readable format
+def get_current_date():
+    return datetime.now().strftime("%B %d, %Y")
+
+
+query_writer_instructions = """You will receive a product name and a target country or region.
+
+Your task is to generate diverse, sophisticated web search queries covering multiple aspects of international market intelligence.
+
+The goal is to help an automated agent explore:
+
+1. Competitive Intelligence
+- Major competing brands, sales channels, popular e-commerce platforms, pricing strategies, promotion tactics.
+
+2. Market Signals and Demand
+- Keyword popularity, consumer search behavior, e-commerce growth trends, user preferences, seasonal shifts.
+
+3. Entry Barriers and Risks
+- Regulatory hurdles, economic volatility, dominant competitors, distribution monopolies, geopolitical instability.
+
+4. Market Gaps and Opportunities
+- Underserved segments, unmet customer needs, price voids, platform white space, trend divergence.
+
+
+Instructions:
+- Generate 4 search queries that together comprehensively cover the research topic.
+- Each query should focus on one specific aspect of the original question.
+- Don't produce more than {number_queries} queries.
+- Queries should be diverse; if the topic is broad, generate more than 1 query.
+- Don't generate multiple similar queries; 1 is enough.
+- Query should ensure that the most current information is gathered. The current date is {current_date}.
+
+Format: 
+- Format your response as a JSON object with BOTH of these exact keys:
+   - "rationale": Brief explanation of why these queries are relevant
+   - "query": A list of search queries
+
+Example:
+
+Topic: What revenue grew more last year — Apple stock or the number of people buying an iPhone
+```json
+{{
+    "rationale": "To determine which grew more — Apple stock value or iPhone user base — we need to retrieve annual growth data from Apple's financial reports, stock performance over the year, and global iPhone unit sales or active users. The queries aim to isolate each component's growth for a fair year-over-year comparison.",
+    "query": [
+        "Apple stock price increase percentage 2024",
+        "Apple annual report 2024 iPhone unit sales",
+        "Apple iPhone user growth 2024",
+        "Apple revenue breakdown by product 2024",
+        "iPhone market penetration or user base change 2023 vs 2024"
+    ]
+}}
+```
+
+Context: {research_topic}"""
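As a quick illustration of how this template is filled in (the topic string below is a made-up example; the placeholder names match the template above):

```python
from agent.prompts import get_current_date, query_writer_instructions

prompt = query_writer_instructions.format(
    number_queries=4,
    current_date=get_current_date(),
    research_topic="smart home devices in Brazil",  # example topic
)
print(prompt[:200])
```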
+
+
+web_searcher_instructions = """Conduct targeted Google Searches to gather the most recent, credible information on "{research_topic}" and synthesize it into a verifiable text artifact.
+
+Instructions:
+- Ensure your searches gather the most current information. The current date is {current_date}.
+- Conduct multiple, diverse searches to gather comprehensive information.
+- Consolidate key findings while meticulously tracking the source(s) for each specific piece of information.
+- The output should be a well-written summary or report based on your search findings. 
+- Only include the information found in the search results, don't make up any information.
+
+Research Topic:
+{research_topic}
+"""
+
+reflection_instructions = """You are an expert research assistant analyzing summaries about "{research_topic}".
+
+Instructions:
+- Identify knowledge gaps or areas that need deeper exploration and generate one or more follow-up queries.
+- If the provided summaries are sufficient to answer the user's question, don't generate a follow-up query.
+- If there is a knowledge gap, generate a follow-up query that would help expand your understanding.
+- Focus on technical details, implementation specifics, or emerging trends that weren't fully covered.
+
+Requirements:
+- Ensure the follow-up query is self-contained and includes necessary context for web search.
+
+Output Format:
+- Format your response as a JSON object with these exact keys:
+   - "is_sufficient": true or false
+   - "knowledge_gap": Describe what information is missing or needs clarification
+   - "follow_up_queries": Write a specific question to address this gap
+
+Example:
+```json
+{{
+    "is_sufficient": true, // or false
+    "knowledge_gap": "The summary lacks information about performance metrics and benchmarks", // "" if is_sufficient is true
+    "follow_up_queries": ["What are typical performance benchmarks and metrics used to evaluate [specific technology]?"] // [] if is_sufficient is true
+}}
+```
+
+Reflect carefully on the Summaries to identify knowledge gaps and produce follow-up queries. Then, produce your output following this JSON format:
+
+Summaries:
+{summaries}
+"""
+
+answer_instructions = """You are generating a high-quality international market report tailored for **Chinese export companies**.
+
+Instructions:
+- The current date is {current_date}.
+- Your task is to synthesize the following summaries into a clear, actionable market intelligence report.
+- The report should help Chinese exporters understand the market and plan their entry/expansion strategy.
+
+Your output must include:
+
+1. **Executive Summary**  
+   - Briefly summarize the current market status for the product in the target country or region.
+
+2. **Market Insights**  
+   - Highlight key findings from the research: macroeconomic trends, trade patterns, consumer preferences, competitors, e-commerce dynamics, and regulatory conditions.
+
+3. **Strategic Recommendations for Chinese Exporters**  
+   - Provide concrete suggestions in the following areas:
+     - **Optimal product combinations**: Which product types, variants, or bundles are more likely to succeed?
+     - **Positioning recommendations**: Value, premium, eco-friendly, smart-tech, etc.
+     - **Pricing strategies**: Competitive pricing bands, opportunities for premium pricing, or price gaps.
+     - **Preferred sales channels**: Offline distribution, e-commerce platforms (e.g., Shopee, Amazon), or cross-border B2B/B2C.
+     - **Bundling or upsell tactics**: Suggestions for bundling with complementary products or services.
+
+4. **Citations**  
+   - Include references for every important insight or data point using markdown links, e.g. [Reuters](https://...).
+
+Your tone should be informative, business-oriented, and suitable for use in a decision-support tool for exporters.
+
+The final report must be written entirely in Chinese; do not include any English.
+
+User Context:
+- {research_topic}
+
+Summaries:
+{summaries}"""

+ 53 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/run_agent_cli.py

@@ -0,0 +1,53 @@
+import argparse
+import sys
+from pathlib import Path
+from typing import Sequence
+
+# This is a workaround to be able to import the 'agent' module.
+# We assume this script is in `backend/src/agent` and we add `backend/src` to the path.
+src_dir = Path(__file__).resolve().parent.parent
+sys.path.insert(0, str(src_dir))
+
+from agent.app import get_agent_executor
+
+
+def main(argv: Sequence[str] | None = None) -> None:
+    """Run the research agent for a single query given on the command line."""
+    parser = argparse.ArgumentParser(
+        description="""
+A command-line interface to the research agent.
+It takes a query as input and prints the intermediate research steps followed by the final result.
+
+Example:
+python backend/src/agent/run_agent_cli.py "What are the latest trends in AI?"
+"""
+    )
+    parser.add_argument("query", help="The query to search for.")
+    args = parser.parse_args(argv)
+    
+    # Get the compiled agent executor
+    app = get_agent_executor()
+
+    print(f"\nRunning agent with query: '{args.query}'\n")
+    print("--- Intermediate Steps ---\n")
+
+    # Stream the agent's execution, keeping the last update so the final
+    # answer can be extracted without invoking the agent a second time.
+    last_step = None
+    for s in app.stream(
+        {"messages": [("user", args.query)]},
+        # The recursion limit is a safety measure to prevent infinite loops.
+        {"recursion_limit": 150},
+    ):
+        if "__end__" not in s:
+            print(s)
+            print("--------------------")
+            last_step = s
+
+    # The final update comes from the `finalize_answer` node and carries the
+    # generated report as its last message.
+    print("\n--- Final Result ---")
+    if last_step and "finalize_answer" in last_step:
+        print(last_step["finalize_answer"]["messages"][-1].content)
+    elif last_step:
+        print(last_step)
+
+
+if __name__ == "__main__":
+    main()

+ 48 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/state.py

@@ -0,0 +1,48 @@
+from __future__ import annotations
+
+import operator
+from dataclasses import dataclass, field
+from typing import TypedDict
+
+from langgraph.graph import add_messages
+from typing_extensions import Annotated
+
+
+class OverallState(TypedDict):
+    messages: Annotated[list, add_messages]
+    search_query: Annotated[list, operator.add]
+    web_research_result: Annotated[list, operator.add]
+    sources_gathered: Annotated[list, operator.add]
+    initial_search_query_count: int
+    max_research_loops: int
+    research_loop_count: int
+    reasoning_model: str
+
+
+class ReflectionState(TypedDict):
+    is_sufficient: bool
+    knowledge_gap: str
+    follow_up_queries: Annotated[list, operator.add]
+    research_loop_count: int
+    number_of_ran_queries: int
+
+
+class Query(TypedDict):
+    query: str
+    rationale: str
+
+
+class QueryGenerationState(TypedDict):
+    search_query: list[Query]
+
+
+class WebSearchState(TypedDict):
+    search_query: str
+    id: str
+
+
+@dataclass(kw_only=True)
+class SearchStateOutput:
+    running_summary: str | None = field(default=None)  # Final report
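A small sketch of what the `Annotated` reducers above do when LangGraph merges node updates; LangGraph applies these reducers automatically, and the values here are illustrative only:

```python
import operator

# operator.add concatenates list-typed state fields such as `search_query`;
# add_messages does the analogous append for chat messages.
current = ["competitor landscape query"]
update = ["pricing strategy query"]
assert operator.add(current, update) == [
    "competitor landscape query",
    "pricing strategy query",
]
```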

+ 6 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/test.py

@@ -0,0 +1,6 @@
+import sys
+
+# Put the backend `src` directory on the path *before* importing the agent package.
+sys.path.insert(0, "/Users/joeychen/Desktop/Weichi_proj/gemini-langgraph_prompt_engineering/backend/src")
+
+import agent.prompts
+
+print(">>> PROMPT MODULE PATH:", agent.prompts.__file__)
+print(">>> answer_instructions PREVIEW:\n", agent.prompts.answer_instructions[:300])

+ 23 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/tools_and_schemas.py

@@ -0,0 +1,23 @@
+from typing import List
+from pydantic import BaseModel, Field
+
+
+class SearchQueryList(BaseModel):
+    query: List[str] = Field(
+        description="A list of search queries to be used for web research."
+    )
+    rationale: str = Field(
+        description="A brief explanation of why these queries are relevant to the research topic."
+    )
+
+
+class Reflection(BaseModel):
+    is_sufficient: bool = Field(
+        description="Whether the provided summaries are sufficient to answer the user's question."
+    )
+    knowledge_gap: str = Field(
+        description="A description of what information is missing or needs clarification."
+    )
+    follow_up_queries: List[str] = Field(
+        description="A list of follow-up queries to address the knowledge gap."
+    )
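These Pydantic schemas are typically bound to the chat model for structured output. A hedged sketch of that pattern (the `langchain_google_genai` binding and the model name are assumptions; the graph code in this commit may wire it differently):

```python
from langchain_google_genai import ChatGoogleGenerativeAI

from agent.tools_and_schemas import SearchQueryList

# Bind the schema so the LLM must return JSON that validates against it.
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash", temperature=0)
structured_llm = llm.with_structured_output(SearchQueryList)

result = structured_llm.invoke("Generate web search queries for: solar panels in Brazil")
print(result.query, result.rationale)
```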

+ 166 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/agent/utils.py

@@ -0,0 +1,166 @@
+from typing import Any, Dict, List
+from langchain_core.messages import AnyMessage, AIMessage, HumanMessage
+
+
+def get_research_topic(messages: List[AnyMessage]) -> str:
+    """
+    Get the research topic from the messages.
+    """
+    # check if request has a history and combine the messages into a single string
+    if len(messages) == 1:
+        research_topic = messages[-1].content
+    else:
+        research_topic = ""
+        for message in messages:
+            if isinstance(message, HumanMessage):
+                research_topic += f"User: {message.content}\n"
+            elif isinstance(message, AIMessage):
+                research_topic += f"Assistant: {message.content}\n"
+    return research_topic
+
+
+def resolve_urls(urls_to_resolve: List[Any], id: int) -> Dict[str, str]:
+    """
+    Create a map of the vertex ai search urls (very long) to a short url with a unique id for each url.
+    Ensures each original URL gets a consistent shortened form while maintaining uniqueness.
+    """
+    prefix = f"https://vertexaisearch.cloud.google.com/id/"
+    urls = [site.web.uri for site in urls_to_resolve]
+
+    # Create a dictionary that maps each unique URL to its first occurrence index
+    resolved_map = {}
+    for idx, url in enumerate(urls):
+        if url not in resolved_map:
+            resolved_map[url] = f"{prefix}{id}-{idx}"
+
+    return resolved_map
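A quick self-contained check of the mapping behavior, using `SimpleNamespace` stand-ins for the grounding-chunk objects (`.web.uri` is the only attribute the function reads; the URLs are made up):

```python
from types import SimpleNamespace

from agent.utils import resolve_urls

chunks = [
    SimpleNamespace(web=SimpleNamespace(uri="https://example.com/long/a")),
    SimpleNamespace(web=SimpleNamespace(uri="https://example.com/long/a")),  # duplicate
    SimpleNamespace(web=SimpleNamespace(uri="https://example.com/long/b")),
]
print(resolve_urls(chunks, id=0))
# {'https://example.com/long/a': 'https://vertexaisearch.cloud.google.com/id/0-0',
#  'https://example.com/long/b': 'https://vertexaisearch.cloud.google.com/id/0-2'}
```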
+
+
+def insert_citation_markers(text, citations_list):
+    """
+    Inserts citation markers into a text string based on start and end indices.
+
+    Args:
+        text (str): The original text string.
+        citations_list (list): A list of dictionaries, where each dictionary
+                               contains 'start_index', 'end_index', and
+                               'segment_string' (the marker to insert).
+                               Indices are assumed to be for the original text.
+
+    Returns:
+        str: The text with citation markers inserted.
+    """
+    # Sort citations by end_index in descending order.
+    # If end_index is the same, secondary sort by start_index descending.
+    # This ensures that insertions at the end of the string don't affect
+    # the indices of earlier parts of the string that still need to be processed.
+    sorted_citations = sorted(
+        citations_list, key=lambda c: (c["end_index"], c["start_index"]), reverse=True
+    )
+
+    modified_text = text
+    for citation_info in sorted_citations:
+        # These indices refer to positions in the *original* text,
+        # but since we iterate from the end, they remain valid for insertion
+        # relative to the parts of the string already processed.
+        end_idx = citation_info["end_index"]
+        marker_to_insert = ""
+        for segment in citation_info["segments"]:
+            marker_to_insert += f" [{segment['label']}]({segment['short_url']})"
+        # Insert the citation marker at the original end_idx position
+        modified_text = (
+            modified_text[:end_idx] + marker_to_insert + modified_text[end_idx:]
+        )
+
+    return modified_text
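A worked example of the reverse-order insertion (the indices refer to the original string, which is why processing from the end keeps earlier indices valid; the citation data is made up):

```python
from agent.utils import insert_citation_markers

text = "BEV sales rose sharply in 2025."
citations = [{
    "start_index": 0,
    "end_index": 31,  # == len(text), so the marker lands after the sentence
    "segments": [{"label": "acea", "short_url": "https://vertexaisearch.cloud.google.com/id/0-0"}],
}]
print(insert_citation_markers(text, citations))
# BEV sales rose sharply in 2025. [acea](https://vertexaisearch.cloud.google.com/id/0-0)
```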
+
+
+def get_citations(response, resolved_urls_map):
+    """
+    Extracts and formats citation information from a Gemini model's response.
+
+    This function processes the grounding metadata provided in the response to
+    construct a list of citation objects. Each citation object includes the
+    start and end indices of the text segment it refers to, and a string
+    containing formatted markdown links to the supporting web chunks.
+
+    Args:
+        response: The response object from the Gemini model, expected to have
+                  a structure including `candidates[0].grounding_metadata`.
+        resolved_urls_map: A mapping from grounding-chunk URIs to resolved
+                  short URLs, as produced by `resolve_urls`.
+
+    Returns:
+        list: A list of dictionaries, where each dictionary represents a citation
+              and has the following keys:
+              - "start_index" (int): The starting character index of the cited
+                                     segment in the original text. Defaults to 0
+                                     if not specified.
+              - "end_index" (int): The character index immediately after the
+                                   end of the cited segment (exclusive).
+              - "segments" (list[str]): A list of individual markdown-formatted
+                                        links for each grounding chunk.
+              - "segment_string" (str): A concatenated string of all markdown-
+                                        formatted links for the citation.
+              Returns an empty list if no valid candidates or grounding supports
+              are found, or if essential data is missing.
+    """
+    citations = []
+
+    # Ensure response and necessary nested structures are present
+    if not response or not response.candidates:
+        return citations
+
+    candidate = response.candidates[0]
+    if (
+        not hasattr(candidate, "grounding_metadata")
+        or not candidate.grounding_metadata
+        or not hasattr(candidate.grounding_metadata, "grounding_supports")
+    ):
+        return citations
+
+    for support in candidate.grounding_metadata.grounding_supports:
+        citation = {}
+
+        # Ensure segment information is present
+        if not hasattr(support, "segment") or support.segment is None:
+            continue  # Skip this support if segment info is missing
+
+        start_index = (
+            support.segment.start_index
+            if support.segment.start_index is not None
+            else 0
+        )
+
+        # Ensure end_index is present to form a valid segment
+        if support.segment.end_index is None:
+            continue  # Skip if end_index is missing, as it's crucial
+
+        # Add 1 to end_index to make it an exclusive end for slicing/range purposes
+        # (assuming the API provides an inclusive end_index)
+        citation["start_index"] = start_index
+        citation["end_index"] = support.segment.end_index
+
+        citation["segments"] = []
+        if (
+            hasattr(support, "grounding_chunk_indices")
+            and support.grounding_chunk_indices
+        ):
+            for ind in support.grounding_chunk_indices:
+                try:
+                    chunk = candidate.grounding_metadata.grounding_chunks[ind]
+                    resolved_url = resolved_urls_map.get(chunk.web.uri, None)
+                    citation["segments"].append(
+                        {
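+                            # Use the chunk title minus its trailing extension
+                            # (e.g. "example.com" -> "example") as the link label.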
+                            "label": chunk.web.title.split(".")[:-1][0],
+                            "short_url": resolved_url,
+                            "value": chunk.web.uri,
+                        }
+                    )
+                except (IndexError, AttributeError, NameError):
+                    # Skip this segment if the chunk, its web metadata, or the
+                    # resolved URL lookup is missing or malformed. In a
+                    # production system, you might want to log this.
+                    pass
+        citations.append(citation)
+    return citations

+ 182 - 0
xinkeaboard-gemini-langgraph_prompt/backend/src/minesweeper.py

@@ -0,0 +1,182 @@
+import random
+import os
+
+
+class Minesweeper:
+    def __init__(self, width=10, height=10, mines=10):
+        self.width = width
+        self.height = height
+        self.mines = mines
+        self.board = [[0 for _ in range(width)] for _ in range(height)]
+        self.revealed = [[False for _ in range(width)] for _ in range(height)]
+        self.marked = [[False for _ in range(width)] for _ in range(height)]
+        self.game_over = False
+        self.win = False
+        self.first_click = True
+
+    def place_mines(self, exclude_x, exclude_y):
+        """在除了第一次点击位置之外的地方放置地雷"""
+        positions = [(x, y) for x in range(self.width) for y in range(self.height)]
+        positions.remove((exclude_x, exclude_y))  # Ensure the first-clicked cell is mine-free
+        mine_positions = random.sample(positions, self.mines)
+
+        for x, y in mine_positions:
+            self.board[y][x] = -1  # -1 marks a mine
+
+        # Count the mines adjacent to each non-mine cell
+        for y in range(self.height):
+            for x in range(self.width):
+                if self.board[y][x] == -1:
+                    continue
+                count = 0
+                for dy in [-1, 0, 1]:
+                    for dx in [-1, 0, 1]:
+                        if dx == 0 and dy == 0:
+                            continue
+                        nx, ny = x + dx, y + dy
+                        if 0 <= nx < self.width and 0 <= ny < self.height:
+                            if self.board[ny][nx] == -1:
+                                count += 1
+                self.board[y][x] = count
+
+    def reveal(self, x, y):
+        """揭示一个格子"""
+        if not (0 <= x < self.width and 0 <= y < self.height):
+            return
+        if self.revealed[y][x] or self.marked[y][x]:
+            return
+
+        if self.first_click:
+            self.place_mines(x, y)
+            self.first_click = False
+
+        self.revealed[y][x] = True
+
+        # Clicking a mine ends the game
+        if self.board[y][x] == -1:
+            self.game_over = True
+            return
+
+        # If there are no adjacent mines, auto-reveal the neighbors
+        if self.board[y][x] == 0:
+            for dy in [-1, 0, 1]:
+                for dx in [-1, 0, 1]:
+                    if dx == 0 and dy == 0:
+                        continue
+                    self.reveal(x + dx, y + dy)
+
+        # Check for a win
+        self.check_win()
+
+    def toggle_mark(self, x, y):
+        """标记或取消标记一个格子"""
+        if not (0 <= x < self.width and 0 <= y < self.height):
+            return
+        if self.revealed[y][x]:
+            return
+        self.marked[y][x] = not self.marked[y][x]
+        self.check_win()
+
+    def check_win(self):
+        """检查是否获胜"""
+        for y in range(self.height):
+            for x in range(self.width):
+                # The game is won once every non-mine cell has been revealed
+                if self.board[y][x] != -1 and not self.revealed[y][x]:
+                    return
+        self.win = True
+        self.game_over = True
+
+    def display(self):
+        """显示游戏板"""
+        # 清屏
+        os.system('cls' if os.name == 'nt' else 'clear')
+
+        # 打印列号
+        print("  ", end="")
+        for x in range(self.width):
+            print(f"{x % 10} ", end="")
+        print()
+
+        # 打印分隔线
+        print("  ", end="")
+        for x in range(self.width):
+            print("- ", end="")
+        print()
+
+        # Print each row
+        for y in range(self.height):
+            print(f"{y % 10}|", end="")  # Row number
+            for x in range(self.width):
+                if self.marked[y][x]:
+                    print("F ", end="")  # Flagged
+                elif not self.revealed[y][x]:
+                    print(". ", end="")  # Unrevealed
+                elif self.board[y][x] == -1:
+                    print("* ", end="")  # Mine
+                elif self.board[y][x] == 0:
+                    print("  ", end="")  # Empty
+                else:
+                    print(f"{self.board[y][x]} ", end="")  # Adjacent-mine count
+            print()
+
+    def play(self):
+        """Run the game loop."""
+        while not self.game_over:
+            self.display()
+            print("\nCommands:")
+            print("  r x y - reveal the cell at (x, y) (e.g. r 3 5)")
+            print("  m x y - flag/unflag the cell at (x, y) (e.g. m 3 5)")
+            print("  q - quit the game")
+
+            try:
+                command = input("\nEnter a command: ").strip().split()
+                if not command:
+                    continue
+
+                if command[0].lower() == 'q':
+                    return
+
+                if len(command) != 3:
+                    print("Invalid command! Please follow the format shown above.")
+                    continue
+
+                action = command[0].lower()
+                x = int(command[1])
+                y = int(command[2])
+
+                if action == 'r':
+                    self.reveal(x, y)
+                elif action == 'm':
+                    self.toggle_mark(x, y)
+                else:
+                    print("Invalid command! Use 'r' to reveal or 'm' to flag.")
+
+            except (ValueError, IndexError):
+                print("Invalid input! Please enter valid coordinates.")
+
+        # Game over: show the final board and result
+        self.display()
+        if self.win:
+            print("\nCongratulations! You won!")
+        else:
+            print("\nGame over! You hit a mine.")
+
+
+def main():
+    print("Welcome to Minesweeper!")
+    print("Rules:")
+    print("1. Enter 'r x y' to reveal the cell at (x, y)")
+    print("2. Enter 'm x y' to flag/unflag a cell you suspect hides a mine")
+    print("3. A number shows how many of the 8 neighboring cells contain mines")
+    print("4. Reveal every safe (non-mine) cell to win")
+    print("\nPress Enter to start the game...")
+    input()
+
+    # Create a game instance (10x10 board, 10 mines)
+    game = Minesweeper(10, 10, 10)
+    game.play()
+
+
+if __name__ == "__main__":
+    main()

File diff suppressed because it is too large
+ 38 - 0
xinkeaboard-gemini-langgraph_prompt/backend/test-agent.ipynb


+ 44 - 0
xinkeaboard-gemini-langgraph_prompt/docker-compose.yml

@@ -0,0 +1,44 @@
+volumes:
+  langgraph-data:
+    driver: local
+services:
+  langgraph-redis:
+    image: docker.io/redis:6
+    container_name: langgraph-redis
+    healthcheck:
+      test: redis-cli ping
+      interval: 5s
+      timeout: 1s
+      retries: 5
+  langgraph-postgres:
+    image: docker.io/postgres:16
+    container_name: langgraph-postgres
+    ports:
+      - "5433:5432"
+    environment:
+      POSTGRES_DB: postgres
+      POSTGRES_USER: postgres
+      POSTGRES_PASSWORD: postgres
+    volumes:
+      - langgraph-data:/var/lib/postgresql/data
+    healthcheck:
+      test: pg_isready -U postgres
+      start_period: 10s
+      timeout: 1s
+      retries: 5
+      interval: 5s
+  langgraph-api:
+    image: gemini-fullstack-langgraph
+    container_name: langgraph-api
+    ports:
+      - "8123:8000"
+    depends_on:
+      langgraph-redis:
+        condition: service_healthy
+      langgraph-postgres:
+        condition: service_healthy
+    environment:
+      GEMINI_API_KEY: ${GEMINI_API_KEY}
+      LANGSMITH_API_KEY: ${LANGSMITH_API_KEY}
+      REDIS_URI: redis://langgraph-redis:6379
+      POSTGRES_URI: postgres://postgres:postgres@langgraph-postgres:5432/postgres?sslmode=disable

+ 61 - 0
xinkeaboard-gemini-langgraph_prompt/example.md

@@ -0,0 +1,61 @@
+**International Market Report: European Automotive Sector**
+**Date:** July 11, 2025
+**Prepared for:** Chinese Automotive Export Companies
+**Subject:** Market Analysis and Strategic Entry/Expansion Plan for Europe
+
+### 1. Executive Summary
+
+The European car market in 2025 is at a critical inflection point, defined by a rapid, regulation-driven shift towards electrification amidst sluggish overall growth. While total new car registrations show minimal year-on-year growth of 0.1% as of May 2025 [focus2move](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQExs4U4zVnnf2iy4VBRf_M-BXffnxvNO6jJrrMoGucvDLUMIHuHfjvkA4ZfEEZF5Qlga_qqcy34hEBY0Mfbm4F5gHHyacSHMV2Fwmajh6NLSNHPkc3mWsfVCocMz_FWNi8MwqnYu8PPA-mT), this figure masks a dramatic internal restructuring. Sales of Battery Electric Vehicles (BEVs), Plug-in Hybrids (PHEVs), and Hybrid-Electric Vehicles (HEVs) are surging, collectively capturing over 58% of the market [acea](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHB99h_6om7bLArSgS8VuAOg7G_JDzvS3Q7BQjgbahjI2YxE4Pjga-OFawLAOzsxOb_DPWTtyq1KDwfr11BabVmSMrbW9eYDJM32oom0amIxg_LQaM2rsVnlND1OcVbgl3jwvgDcREkCwUv3Wi22oE8ekOeo5rzGDW548Evl20mXqquxyAi-UPMYPYd-Xbu2W85vCXllKfe7WoAuSETezNUheyam_8i3qLEak3bJcFudv87WXZjp6zD) [acea](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQFjWpaqZyZpTW7cEsFk69_vrycEYoAoNaytaF64VX1has0cd31NgL6n3quBHrJmbCBNvY4FHVYylv0fBFK0CF5kBQZmmXanh_mDijT_sOOCCE97T9oQFtfGJJIhO4fUF5l-f_cAxz18nHu876WDrgZ6ljvgBetfWRJZek5K6ITNMxiyNFSBoozsqlO90kqe8T3Jg9rVqc6KAiEAOENHcu8ulFNKDANYI_S_8zrbFCUgGnbqLQywnOEc). Conversely, the market share for traditional petrol and diesel cars has plummeted to just 38.1% [acea](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHB99h_6om7bLArSgS8VuAOg7G_JDzvS3Q7BQjgbahjI2YxE4Pjga-OFawLAOzsxOb_DPWTtyq1KDwfr11BabVmSMrbW9eYDJM32oom0amIxg_LQaM2rsVnlND1OcVbgl3jwvgDcREkCwUv3Wi22oE8ekOeo5rzGDW548Evl20mXqquxyAi-UPMYPYd-Xbu2W85vCXllKfe7WoAuSETezNUheyam_8i3qLEak3bJcFudv87WXZjp6zD).
+
+This transition presents a significant opportunity for Chinese exporters, who are already gaining traction. Chinese brands surpassed a 5% market share for the first time in Q1 2025, with BYD notably overtaking Tesla in European sales in April 2025 [dw](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQEKH5HaKTTCaOzfwPihrvSxRW4BmXmhcR4QoZ2eN8m8cn2a-L1ELdrh9nP1hHQZSXi0acYU9901C4KMUULFInEuySvYIzp_HrFrLQwcKba8Fz8TVVS7mO2L5uutNco3doUWgIs2qmtZQXxVgNif64hgeRlC_GjLKzBQzPoICDQQDXp-salBdV0xdv-6s_xOcthybjLSEtfBkmBweOqADH-57ulox0AluhFogvkfYg==). Key opportunities lie in the affordable EV segment (under €25,000), the booming PHEV market, and in providing technologically advanced vehicles that challenge the value proposition of established European brands. Success will require a nuanced strategy focused on regulatory compliance, addressing consumer concerns like charging infrastructure, and leveraging a hybrid sales model.
+
+### 2. Market Insights
+
+**Macroeconomic & Market Overview**
+The European economy is navigating persistent geopolitical uncertainty and slower-than-expected growth, with GDP projected to rise by only 0.9% in 2025 [focus2move](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQExs4U4zVnnf2iy4VBRf_M-BXffnxvNO6jJrrMoGucvDLUMIHuHfjvkA4ZfEEZF5Qlga_qqcy34hEBY0Mfbm4F5gHHyacSHMV2Fwmajh6NLSNHPkc3mWsfVCocMz_FWNi8MwqnYu8PPA-mT). This has contributed to a stagnant overall car market, which totaled 6.05 million units through May 2025, a marginal 0.1% increase from the prior year [focus2move](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQExs4U4zVnnf2iy4VBRf_M-BXffnxvNO6jJrrMoGucvDLUMIHuHfjvkA4ZfEEZF5Qlga_qqcy34hEBY0Mfbm4F5gHHyacSHMV2Fwmajh6NLSNHPkc3mWsfVCocMz_FWNi8MwqnYu8PPA-mT). However, the market is highly dynamic, with significant variations by country; Spain, for example, defied the trend with 13.6% growth [focus2move](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQGLrq8qFjD-3j4JfZ16QvRCTREJHLzhDmg5bvcn19PFgMnDyURFzl20TRgD30qbAXsbleZou1nXkYYHggOZivctzqhlk76bapnZAu3j-Twai7EAkdSZYTBYJcV5yDZHNjHWcJty-B3VFGIeKcZ57oE27bHcN3mDGiLuvpPa1).
+
+**The Irreversible Shift to Electrification**
+The market is fundamentally reorienting away from internal combustion engines (ICE).
+*   **Hybrid Dominance (HEV):** Non-rechargeable hybrids are currently the most popular choice for European consumers, capturing a commanding **35.1%** market share. This reflects a consumer preference for a practical transition, mitigating concerns about charging infrastructure [acea](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHB99h_6om7bLArSgS8VuAOg7G_JDzvS3Q7BQjgbahjI2YxE4Pjga-OFawLAOzsxOb_DPWTtyq1KDwfr11BabVmSMrbW9eYDJM32oom0amIxg_LQaM2rsVnlND1OcVbgl3jwvgDcREkCwUv3Wi22oE8ekOeo5rzGDW548Evl20mXqquxyAi-UPMYPYd-Xbu2W85vCXllKfe7WoAuSETezNUheyam_8i3qLEak3bJcFudv87WXZjp6zD).
+*   **BEV Growth:** Battery-electric vehicles now account for **15.4%** of the market, with over 701,000 units sold in the first five months of 2025 [acea](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHB99h_6om7bLArSgS8VuAOg7G_JDzvS3Q7BQjgbahjI2YxE4Pjga-OFawLAOzsxOb_DPWTtyq1KDwfr11BabVmSMrbW9eYDJM32oom0amIxg_LQaM2rsVnlND1OcVbgl3jwvgDcREkCwUv3Wi22oE8ekOeo5rzGDW548Evl20mXqquxyAi-UPMYPYd-Xbu2W85vCXllKfe7WoAuSETezNUheyam_8i3qLEak3bJcFudv87WXZjp6zD).
+*   **PHEV Surge:** Plug-in hybrids are experiencing explosive growth, with sales soaring **46%** in May 2025 alone. They are seen as an ideal "best of both worlds" solution by many consumers [evxl](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQGc4DQPlfMsG2TiZCMVu4xWcK8Ui1m7pt0bswT3tbYWcH4Vn5O1fvrFnUrtFYkPlfRKmINO0X9PVobJ0W9MdZ3xG5xNmF9hlvPCwCA92pMqiiSoSq1blVf1Cs7qRdww7TNyAxcjsiUKqWmYg9LUxQ==).
+*   **ICE Decline:** The combined market share of petrol and diesel cars has fallen to **38.1%**, down from 48.5% in the same period of 2024, with petrol car registrations dropping 20.2% [acea](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHB99h_6om7bLArSgS8VuAOg7G_JDzvS3Q7BQjgbahjI2YxE4Pjga-OFawLAOzsxOb_DPWTtyq1KDwfr11BabVmSMrbW9eYDJM32oom0amIxg_LQaM2rsVnlND1OcVbgl3jwvgDcREkCwUv3Wi22oE8ekOeo5rzGDW548Evl20mXqquxyAi-UPMYPYd-Xbu2W85vCXllKfe7WoAuSETezNUheyam_8i3qLEak3bJcFudv87WXZjp6zD) [assetfinanceconnect](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHQc6lT600o7XeqD4GEMITRdqOPQjPQMcyHLefY5Yrr0Xj9xuLnWxXKKw94ob0jdmu_NenTuLSks9oh9GU0VhHHj50w-nBNn6kWaK8gLosWW6p9GOSObL1Z8SQSp7twaDFQFcbuHdMv37pArLOuO1DH6aPnPeDIiw1tzf50W2sj0a2j7zo75_L0sad4Hf151jajh0HTX1Cecjgm8E5g56E=).
+
+**Consumer Preferences & Unmet Needs**
+While policy pushes for full electrification, consumer behavior is more cautious. The popularity of hybrids highlights widespread "range anxiety" and skepticism about the public charging infrastructure, which, despite reaching 1 million points, is still considered inadequate [dw](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQEKH5HaKTTCaOzfwPihrvSxRW4BmXmhcR4QoZ2eN8m8cn2a-L1ELdrh9nP1hHQZSXi0acYU9901C4KMUULFInEuySvYIzp_HrFrLQwcKba8Fz8TVVS7mO2L5uutNco3doUWgIs2qmtZQXxVgNif64hgeRlC_GjLKzBQzPoICDQQDXp-salBdV0xdv-6s_xOcthybjLSEtfBkmBweOqADH-57ulox0AluhFogvkfYg==). A significant unmet need exists for **affordable EVs**, particularly models priced below €25,000, a segment where European manufacturers have been slow to deliver [ioplus](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQGLrq8qFjD-3j4JfZ16QvRCTREJHLzhDmg5bvcn19PFgMnDyURFzl20TRgD30qbAXsbleZou1nXkYYHggOZivctzqhlk76bapnZAu3j-Twai7EAkdSZYTBYJcV5yDZHNjHWcJty-B3VFGIeKcZ57oE27bHcN3mDGiLuvpPa3). Consumers also increasingly expect advanced digital features, connectivity, and flexible ownership models like subscriptions [cities-today](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQELHzB1SWj7AUN58L_o1UUyVkyZ6ABUmEdgmc9CY5TWp80uAIyTo9SepFUsqRZafEKG5ZNLeoTnfmT1R_oTN1-nT7nhcPELoaz2fN1qSEoxRJBoJorEfws9rkvUVBxW0TC-rwPigB8yEL_b-aOiX6CqW6MUlTjDPnJsiB-9IdNYmmeNV_3WtTzFDhvdp_c9_aY08d3MjYxK3-pSQCsRoRc-5bGwj-AdXtJKBTzc7ZwmaD3W8cSz).
+
+**Competitive Landscape**
+*   **Incumbent Leaders:** Volkswagen Group remains Europe's largest carmaker, holding a 25.9% market share in Q1 2025, followed by Stellantis and Renault [best-selling-cars](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQEtSeHbiagDg6xd5poe2EL61f6tGTpaodYfUEZfrfAgo3IF30LJ3wXbJ06c7617juIHwK8HCukURQkTLMy7y-sZPTfFYuACoOWNo-36b98C6gyDRMzoGPEW8LJ_ibw2-aYPHng6j9W9wSH-rvb8QFgxq1rvGxvPRsKXSVSzSo-N9EkvKeAz4u7kHHwE-L3nr7cVS7H9IKMqXWlCUzg1).
+*   **Tesla's Decline:** Tesla has faced significant headwinds, with sales plummeting 39% across Europe from January to April 2025, partly due to brand perception issues and increased competition [dw](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQEKH5HaKTTCaOzfwPihrvSxRW4BmXmhcR4QoZ2eN8m8cn2a-L1ELdrh9nP1hHQZSXi0acYU9901C4KMUULFInEuySvYIzp_HrFrLQwcKba8Fz8TVVS7mO2L5uutNco3doUWgIs2qmtZQXxVgNif64hgeRlC_GjLKzBQzPoICDQQDXp-salBdV0xdv-6s_xOcthybjLSEtfBkmBweOqADH-57ulox0AluhFogvkfYg==).
+*   **The Rise of Chinese Brands:** Chinese manufacturers are emerging as a formidable competitive force. Their market share surpassed 5% in Q1 2025, and BYD's sales success demonstrates a growing acceptance among European buyers [dw](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQEKH5HaKTTCaOzfwPihrvSxRW4BmXmhcR4QoZ2eN8m8cn2a-L1ELdrh9nP1hHQZSXi0acYU9901C4KMUULFInEuySvYIzp_HrFrLQwcKba8Fz8TVVS7mO2L5uutNco3doUWgIs2qmtZQXxVgNif64hgeRlC_GjLKzBQzPoICDQQDXp-salBdV0xdv-6s_xOcthybjLSEtfBkmBweOqADH-57ulox0AluhFogvkfYg==). This success is built on competitive pricing and strong product offerings in the EV space.
+
+**Regulatory & E-commerce Dynamics**
+*   **Stringent Regulations:** The EU's regulatory framework is the primary market driver. The **2025 CO2 target** of 93.6 g/km (average) forces manufacturers to sell more low-emission vehicles or face substantial fines [mobilityportal](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQEtqkhnbAigIyC-ECKGk9TPogtkawPIr8NvZ90hoO2vsdElE4PqzjWizt8CdrI7HzpbaH1wv8jsm40HqphoTlvKiT4LmLazimz8SPGrhzKb7Buzt1cMsC-wJZ4WR9j-Ebz9urRo-lKOIYcPJcaypKU=). The path is set for a 100% zero-emission mandate for new cars by 2035 [forbes](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQFmayjW0T-nTTKH67H5poQRnYcvmMB60RY-FbvDddXLMF4ORzucu0w_0_RKnkFjUEKRPotB1sAYVr0lwNCfDJk0HxODbIqyn88G6XJAOiult_MPaPAx7BCn-xnn1BKm9A_kn5sCFnUgQv3HqjY8-w1erYlSLaPcdDj-URh30EkZUEj35dPZ70dklAuZuX1IDQ4Yr333dGl-07Epnk1G3sNmiC9AHMpCa82WnRwRbg==).
+*   **Supply Chain Localization:** The EU is actively promoting supply chain resilience through initiatives like the European Battery Alliance and the Critical Raw Materials Act, aiming to localize battery production and recycling [automotivelogistics](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQHj-1GcnuFh2cEVPMViYGMS4CcCnFhVlqzRKdcccme_cwzoHvNJjOXfO1Q8guGatIALY1IKX87aYbpg9IITAPo5r8BEx44lrV0YnHj1KVNTv94oh6TnM9FmdnI9voMJne7jSk0woILTZMgt8m6oOs2iOIqaD8FyStTXxeMndn5ggigKPe-HQ7jxY37J41qbFYIqAnsEGzAF9em5MYWfoAhUrDPF2eSCltba6VJoCdTMbru5Mu2Emn0WUO2ptDKOjXt_CntOBd-mI9a6XrHv92mmz3Abo54YXZHf3RsDx4-o0ip7ufN49mbrsR2KtEvq2Lqd9H9yDtpMU5ICE-XNxw==) [europa](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQGLrq8qFjD-3j4JfZ16QvRCTREJHLzhDmg5bvcn19PFgMnDyURFzl20TRgD30qbAXsbleZou1nXkYYHggOZivctzqhlk76bapnZAu3j-Twai7EAkdSZYTBYJcV5yDZHNjHWcJty-B3VFGIeKcZ57oE27bHcN3mDGiLuvpPa2). A "battery passport" will be mandatory from 2027, tracking materials and carbon footprint [leadventgrp](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQEGbnZA9xGfO7V37YOsxZ8NetKbkul02lx0-GAmyiToaLtIgSo36Q5tQNX35_Qv4NQ0Z6GUzWWuH2TkRv8J5cNbaRpXkng5xk72UgYHmUqYColFU8HJuTCnqlAvm6qOQz0ak68TrgEZOYeGmKkt-0kOjEpQDycqqKy1jcAvtTYMr8X8M1GEymhBuXw=5).
+*   **Sales Channels:** While traditional dealerships remain important for service and test drives, there is a clear trend towards hybrid sales models that combine online direct-to-consumer (D2C) platforms with physical "experience centers." B2B fleet sales are also a critical channel, as corporate fleets are under pressure to electrify [rhomotion](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQGccc7VR785MWUrcMJjNgobmWpesT2SAolSutoqQ7OOLrXSjhcQeBuPcAJOzCachWEKFrutHilnewcmT8wlV7nnKgqrkCykEkpDssrrljXFZ1wAWIiyh1-AfhwisxhFu9Qb391FhK9kR-0pcQAiOHV4oa9mtA==).
+
+### 3. Strategic Recommendations for Chinese Exporters
+
+**Optimal Product Combinations**
+*   **Lead with Affordable BEVs:** Focus on the sub-€25,000 compact and city car segment. This is a major market gap with high demand. Models like the BYD Dolphin have proven this strategy works [go-e](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQFciYgPixt5Xo6oG60T-8PusUMKcnS12ZqK63wAsUqi6VCfovjtlE5BK_DAGboZAwxk9NI_k-_9kCTEw8HaS3Ov0pWnCS-XR_ZZS3c7pxWvMQ03liR2HjuzM-yln4PKrWDWx60SmHPmo_CX2L3RJibRerlwVVTvgYIpHliJ8A7T).
+*   **Offer Advanced PHEVs:** Capitalize on the booming demand for plug-in hybrids. Offer models with a competitive electric-only range (>60km), fast charging capabilities, and rich standard features to serve as a "bridge" product for consumers not yet ready for full BEVs.
+*   **Compete with Feature-Rich HEVs:** Given that hybrids are the largest market segment, offer competitively priced hybrid models (SUVs and hatchbacks) that outperform European rivals on standard equipment, technology, and warranty.
+
+**Positioning Recommendations**
+*   **The Smart Value Leader:** Position your brand as the intelligent choice, offering superior technology, safety features (ADAS), and connectivity as standard, not as expensive options. This directly counters the pricing strategy of many European incumbents.
+*   **Eco-Compliant & Transparent:** Proactively market your compliance with all EU regulations, including the upcoming 2027 battery passport. Emphasize sustainable manufacturing and the use of recycled materials to build trust with environmentally conscious European buyers.
+*   **Technology-Forward Innovator:** Showcase advanced in-car infotainment, seamless smartphone integration, and innovative features like Vehicle-to-Load (V2L) or Vehicle-to-Grid (V2G) capabilities to establish a reputation for cutting-edge technology.
+
+**Pricing Strategies**
+*   **Aggressive Segment Entry:** Price BEV and PHEV models aggressively to undercut key European competitors (e.g., VW ID series, Peugeot e-208) by 10-15% while offering a superior level of standard equipment.
+*   **Target the Price Gap:** Analyze the market for popular European models where the base version is poorly equipped. Introduce a single, high-spec trim level of your competing model at a price point that matches or slightly undercuts their entry-level price.
+*   **Transparent, All-Inclusive Pricing:** Adopt a "what you see is what you get" pricing model, common in D2C sales, to build trust and simplify the purchasing process, avoiding the complex and costly option lists of legacy automakers.
+
+**Preferred Sales Channels**
+*   **Hybrid D2C & Dealer Network:** Implement a dual strategy. Use a direct-to-consumer (D2C) online platform for transparent pricing and ordering. Simultaneously, partner with established multi-brand dealer groups or open brand-owned "experience centers" in major cities for test drives, handovers, and after-sales service.
+*   **Focus on B2B Fleet Sales:** Create a dedicated fleet sales division to target corporate and rental car company clients. These clients are highly motivated by Total Cost of Ownership (TCO) and regulatory pressure to electrify, making them receptive to value-oriented EV and PHEV offers [rhomotion](https://vertexaisearch.cloud.google.com/grounding-api-redirect/AUZIYQGccc7VR785MWUrcMJjNgobmWpesT2SAolSutoqQ7OOLrXSjhcQeBuPcAJOzCachWEKFrutHilnewcmT8wlV7nnKgqrkCykEkpDssrrljXFZ1wAWIiyh1-AfhwisxhFu9Qb391FhK9kR-0pcQAiOHV4oa9mtA==).
+
+**Bundling or Upsell Tactics**
+*   **The "EV Starter" Package:** Bundle the vehicle purchase with a home wallbox charger and professional installation service. This directly addresses a primary barrier to EV adoption and provides immense customer value.
+*   **Battery-as-a-Service (BaaS):** Consider offering a BaaS model where the customer buys the car and leases the battery. This significantly lowers the initial purchase price, making EVs accessible to a wider audience.
+*   **Comprehensive Service & Warranty Bundles:** Offer an industry-leading warranty (e.g., 7-10 years) bundled with a multi-year service and insurance package. This "peace of mind" offering is a powerful differentiator that builds long-term customer loyalty.

Some files were not shown because too many files changed in this diff