{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Welcome to the start of your adventure in Agentic AI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Are you ready for action??

\n", " Have you completed all the setup steps in the setup folder?
\n", " Have you read the README? Many common questions are answered here!
\n", " Have you checked out the guides in the guides folder?
\n", " Well in that case, you're ready!!\n", "
\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

This code is a live resource - keep an eye out for my updates

\n", " I push updates regularly. As people ask questions or have problems, I add more examples and improve explanations. As a result, the code below might not be identical to the videos, as I've added more steps and better comments. Consider this like an interactive book that accompanies the lectures.

\n", " I try to send emails regularly with important updates related to the course. You can find this in the 'Announcements' section of Udemy in the left sidebar. You can also choose to receive my emails via your Notification Settings in Udemy. I'm respectful of your inbox and always try to add value with my emails!\n", "
\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### And please do remember to contact me if I can help\n", "\n", "And I love to connect: https://www.linkedin.com/in/eddonner/\n", "\n", "\n", "### New to Notebooks like this one? Head over to the guides folder!\n", "\n", "Just to check you've already added the Python and Jupyter extensions to Cursor, if not already installed:\n", "- Open extensions (View >> extensions)\n", "- Search for python, and when the results show, click on the ms-python one, and Install it if not already installed\n", "- Search for jupyter, and when the results show, click on the Microsoft one, and Install it if not already installed \n", "Then View >> Explorer to bring back the File Explorer.\n", "\n", "And then:\n", "1. Click where it says \"Select Kernel\" near the top right, and select the option called `.venv (Python 3.12.9)` or similar, which should be the first choice or the most prominent choice. You may need to choose \"Python Environments\" first.\n", "2. Click in each \"cell\" below, starting with the cell immediately below this text, and press Shift+Enter to run\n", "3. Enjoy!\n", "\n", "After you click \"Select Kernel\", if there is no option like `.venv (Python 3.12.9)` then please do the following: \n", "1. On Mac: From the Cursor menu, choose Settings >> VS Code Settings (NOTE: be sure to select `VSCode Settings` not `Cursor Settings`); \n", "On Windows PC: From the File menu, choose Preferences >> VS Code Settings(NOTE: be sure to select `VSCode Settings` not `Cursor Settings`) \n", "2. In the Settings search bar, type \"venv\" \n", "3. In the field \"Path to folder with a list of Virtual Environments\" put the path to the project root, like C:\\Users\\username\\projects\\agents (on a Windows PC) or /Users/username/projects/agents (on Mac or Linux). \n", "And then try again.\n", "\n", "Having problems with missing Python versions in that list? Have you ever used Anaconda before? It might be interferring. 
Quit Cursor, bring up a new command line, and make sure that your Anaconda environment is deactivated: \n", "`conda deactivate` \n", "And if you still have any problems with conda and python versions, it's possible that you will need to run this too: \n", "`conda config --set auto_activate_base false` \n", "and then from within the Agents directory, you should be able to run `uv python list` and see the Python 3.12 version." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# First let's do an import. If you get an Import Error, double check that your Kernel is correct..\n", "\n", "from dotenv import load_dotenv\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Next it's time to load the API keys into environment variables\n", "# If this returns false, see the next cell!\n", "\n", "load_dotenv(override=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Wait, did that just output `False`??\n", "\n", "If so, the most common reason is that you didn't save your `.env` file after adding the key! Be sure to have saved.\n", "\n", "Also, make sure the `.env` file is named precisely `.env` and is in the project root directory (`agents`)\n", "\n", "By the way, your `.env` file should have a stop symbol next to it in Cursor on the left, and that's actually a good thing: that's Cursor saying to you, \"hey, I realize this is a file filled with secret information, and I'm not going to send it to an external AI to suggest changes, because your keys should not be shown to anyone else.\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Final reminders

\n", " 1. If you're not confident about Environment Variables or Web Endpoints / APIs, please read Topics 3 and 5 in this technical foundations guide.
\n", " 2. If you want to use AIs other than OpenAI, like Gemini, DeepSeek or Ollama (free), please see the first section in this AI APIs guide.
\n", " 3. If you ever get a Name Error in Python, you can always fix it immediately; see the last section of this Python Foundations guide and follow both tutorials and exercises.
\n", "
\n", "
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "GEMINI_API_KEY exists and begins AIzaSyBJ\n" ] } ], "source": [ "# Check the key - if you're not using OpenAI, check whichever key you're using! Ollama doesn't need a key.\n", "\n", "import os\n", "genimi_api_key = os.getenv('GEMINI_API_KEY')\n", "\n", "if genimi_api_key:\n", " print(f\"GEMINI_API_KEY exists and begins {genimi_api_key[:8]}\")\n", "else:\n", " print(\"GEMINI_API_KEY not set - please head to the troubleshooting guide in the setup folder\")\n", " \n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# And now - the all important import statement\n", "# If you get an import error - head over to troubleshooting in the Setup folder\n", "# Even for other LLM providers like Gemini, you still use this OpenAI import - see Guide 9 for why\n", "\n", "from google import genai" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# And now we'll create an instance of the OpenAI class\n", "# If you're not sure what it means to create an instance of a class - head over to the guides folder (guide 6)!\n", "# If you get a NameError - head over to the guides folder (guide 6)to learn about NameErrors - always instantly fixable\n", "# If you're not using OpenAI, you just need to slightly modify this - precise instructions are in the AI APIs guide (guide 9)\n", "\n", "client = genai.Client()" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "AI learns patterns from data to make predictions or decisions.\n" ] } ], "source": [ "# And now call it! 
Any problems, head to the troubleshooting guide\n", "# This uses gemini-2.5-flash, an inexpensive Gemini model\n", "# The APIs guide (guide 9) has exact instructions for using even cheaper or free alternatives\n", "# If you get a NameError, head to the guides folder (guide 6) to learn about NameErrors - always instantly fixable\n", "response = client.models.generate_content(\n", " model=\"gemini-2.5-flash\",\n", " contents=\"Explain how AI works in a few words\",\n", ")\n", "\n", "print(response.text)\n" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# And now - let's ask for a question:\n", "\n", "question = \"Please propose a hard, challenging question to assess someone's IQ. Respond only with the question.\"\n", "messages = [{\"role\": \"user\", \"content\": question}]  # OpenAI-style format; the genai calls here pass the question string directly as contents\n" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "A circular table has *n* equally spaced seats. *n* guests are invited, but each guest either accepts or declines independently with probability 1/2. If a guest accepts, they are seated. If two seated guests are adjacent, one guest (chosen randomly) must immediately leave.\n", "\n", "What is the expected number of seated guests after this process terminates, expressed as a function of *n*?\n", "\n" ] } ], "source": [ "# ask it - this uses gemini-2.0-flash, still cheap but more capable\n", "response = client.models.generate_content(\n", " model=\"gemini-2.0-flash\",\n", " contents=question,\n", ")\n", "question = response.text\n", "print(question)\n" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Let $E(n)$ be the expected number of seated guests after the process terminates.\n", "Let $X_i$ be an indicator random variable that is 1 if the $i$-th guest accepts the invitation and 0 otherwise. 
Then $P(X_i = 1) = 1/2$ and $P(X_i = 0) = 1/2$.\n", "If two seated guests are adjacent, one guest (chosen randomly) must immediately leave.\n", "\n", "If $n=1$, then if the guest accepts the invitation, they are seated, so $E(1) = \\frac{1}{2}$.\n", "If $n=2$, if both guests accept the invitation, one must leave. So we have the possibilities:\n", "\\begin{itemize}\n", " \\item Guest 1 accepts, Guest 2 declines: 1 seated guest.\n", " \\item Guest 1 declines, Guest 2 accepts: 1 seated guest.\n", " \\item Guest 1 accepts, Guest 2 accepts: 1 seated guest.\n", " \\item Guest 1 declines, Guest 2 declines: 0 seated guests.\n", "\\end{itemize}\n", "$E(2) = \\frac{1}{4}(1) + \\frac{1}{4}(1) + \\frac{1}{4}(1) + \\frac{1}{4}(0) = \\frac{3}{4}$.\n", "\n", "If $n=3$, we have the following possibilities:\n", "\\begin{itemize}\n", " \\item 0 guests accept: 0\n", " \\item 1 guest accepts: 1\n", " \\item 2 guests accept:\n", " \\begin{itemize}\n", " \\item Adjacent: 1\n", " \\item Not adjacent: 2\n", " \\end{itemize}\n", " \\item 3 guests accept: 1\n", "\\end{itemize}\n", "Probability of $k$ guests accepting is $\\binom{n}{k} \\left(\\frac{1}{2}\\right)^n$.\n", "So $P(\\text{0 guests accept}) = \\binom{3}{0} \\left(\\frac{1}{2}\\right)^3 = \\frac{1}{8}$\n", "$P(\\text{1 guest accepts}) = \\binom{3}{1} \\left(\\frac{1}{2}\\right)^3 = \\frac{3}{8}$\n", "$P(\\text{2 guests accept}) = \\binom{3}{2} \\left(\\frac{1}{2}\\right)^3 = \\frac{3}{8}$\n", "$P(\\text{3 guests accept}) = \\binom{3}{3} \\left(\\frac{1}{2}\\right)^3 = \\frac{1}{8}$\n", "If 2 guests accept, they can be adjacent in 3 ways (12, 23, 31). So the probability of 2 guests accepting and being adjacent is $\\frac{3}{8}$. 
The probability of 2 guests accepting and not being adjacent is 0.\n", "If the two guests are adjacent, one leaves, so we have 1 guest.\n", "If the two guests are not adjacent, both stay, so we have 2 guests.\n", "$E(3) = \\frac{1}{8}(0) + \\frac{3}{8}(1) + \\frac{3}{8}(1) + \\frac{1}{8}(1) = \\frac{7}{8}$.\n", "\n", "Consider $n$ equally spaced seats. Let $a_n$ be the expected number of seated guests.\n", "$E(n) = \\frac{n}{3}$\n", "\n", "Final Answer: The final answer is $\\boxed{n/3}$\n" ] } ], "source": [ "# Ask it again\n", "\n", "response = client.models.generate_content(\n", " model=\"gemini-2.0-flash\",\n", " contents=question,\n", ")\n", "\n", "answer = response.text\n", "print(answer)\n" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "Let $E(n)$ be the expected number of seated guests after the process terminates.\n", "Let $X_i$ be an indicator random variable that is 1 if the $i$-th guest accepts the invitation and 0 otherwise. Then $P(X_i = 1) = 1/2$ and $P(X_i = 0) = 1/2$.\n", "If two seated guests are adjacent, one guest (chosen randomly) must immediately leave.\n", "\n", "If $n=1$, then if the guest accepts the invitation, they are seated, so $E(1) = \\frac{1}{2}$.\n", "If $n=2$, if both guests accept the invitation, one must leave. 
So we have the possibilities:\n", "\\begin{itemize}\n", " \\item Guest 1 accepts, Guest 2 declines: 1 seated guest.\n", " \\item Guest 1 declines, Guest 2 accepts: 1 seated guest.\n", " \\item Guest 1 accepts, Guest 2 accepts: 1 seated guest.\n", " \\item Guest 1 declines, Guest 2 declines: 0 seated guests.\n", "\\end{itemize}\n", "$E(2) = \\frac{1}{4}(1) + \\frac{1}{4}(1) + \\frac{1}{4}(1) + \\frac{1}{4}(0) = \\frac{3}{4}$.\n", "\n", "If $n=3$, we have the following possibilities:\n", "\\begin{itemize}\n", " \\item 0 guests accept: 0\n", " \\item 1 guest accepts: 1\n", " \\item 2 guests accept:\n", " \\begin{itemize}\n", " \\item Adjacent: 1\n", " \\item Not adjacent: 2\n", " \\end{itemize}\n", " \\item 3 guests accept: 1\n", "\\end{itemize}\n", "Probability of $k$ guests accepting is $\\binom{n}{k} \\left(\\frac{1}{2}\\right)^n$.\n", "So $P(\\text{0 guests accept}) = \\binom{3}{0} \\left(\\frac{1}{2}\\right)^3 = \\frac{1}{8}$\n", "$P(\\text{1 guest accepts}) = \\binom{3}{1} \\left(\\frac{1}{2}\\right)^3 = \\frac{3}{8}$\n", "$P(\\text{2 guests accept}) = \\binom{3}{2} \\left(\\frac{1}{2}\\right)^3 = \\frac{3}{8}$\n", "$P(\\text{3 guests accept}) = \\binom{3}{3} \\left(\\frac{1}{2}\\right)^3 = \\frac{1}{8}$\n", "If 2 guests accept, they can be adjacent in 3 ways (12, 23, 31). So the probability of 2 guests accepting and being adjacent is $\\frac{3}{8}$. The probability of 2 guests accepting and not being adjacent is 0.\n", "If the two guests are adjacent, one leaves, so we have 1 guest.\n", "If the two guests are not adjacent, both stay, so we have 2 guests.\n", "$E(3) = \\frac{1}{8}(0) + \\frac{3}{8}(1) + \\frac{3}{8}(1) + \\frac{1}{8}(1) = \\frac{7}{8}$.\n", "\n", "Consider $n$ equally spaced seats. 
Let $a_n$ be the expected number of seated guests.\n", "$E(n) = \\frac{n}{3}$\n", "\n", "Final Answer: The final answer is $\\boxed{n/3}$" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from IPython.display import Markdown, display\n", "\n", "display(Markdown(answer))\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Congratulations!\n", "\n", "That was a small, simple step in the direction of Agentic AI, with your new environment!\n", "\n", "Next time things get more interesting..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Exercise

\n", " Now try this commercial application:
\n", " First ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.
\n", " Then ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.
\n", " Finally have 3 third LLM call propose the Agentic AI solution.
\n", " We will cover this at up-coming labs, so don't worry if you're unsure.. just give it a try!\n", "
\n", "
" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "bussiness idea: ## Business Area: Personalized Healthcare Navigation and Coordination\n", "\n", "This area is ripe for Agentic AI because it involves:\n", "\n", "* **Complex, personalized needs:** Individuals have unique health conditions, insurance plans, access to resources, and preferences.\n", "* **Fragmented information:** Health information is scattered across providers, insurance companies, pharmacies, and personal records.\n", "* **High emotional stakes:** People are often stressed and confused when dealing with healthcare issues.\n", "* **Repetitive tasks:** Booking appointments, tracking medications, submitting insurance claims, and gathering test results are time-consuming.\n", "\n", "**Agentic AI Opportunity:**\n", "\n", "An Agentic AI system could act as a **Personalized Healthcare Navigator and Coordinator** for individuals, offering the following services:\n", "\n", "* **Understanding Individual Needs:**\n", " * Securely access and analyze medical records, insurance plans, and personal preferences through user consent and APIs.\n", " * Use NLP to understand user needs and concerns expressed through text or voice.\n", " * Maintain a comprehensive and up-to-date user profile.\n", "* **Proactive Healthcare Recommendations:**\n", " * Identify potential health risks based on user data and suggest preventative screenings or lifestyle changes.\n", " * Recommend specialists and providers based on user's condition, insurance, location, and preferences, leveraging rating and review data.\n", "* **Seamless Appointment Scheduling:**\n", " * Automatically check availability, negotiate appointment times with providers, and send reminders.\n", " * Handle rescheduling and cancellations.\n", "* **Medication Management:**\n", " * Remind users to take medication, track refills, and identify potential drug interactions.\n", " * Help 
users find the best prices for their medications.\n", "* **Insurance Claim Support:**\n", " * Automatically submit insurance claims and track their progress.\n", " * Help users understand their coverage and navigate complex billing processes.\n", " * Identify and appeal denied claims.\n", "* **Health Education and Support:**\n", " * Provide personalized educational materials and resources based on user's condition and interests.\n", " * Connect users with support groups and communities.\n", "* **Emergency Assistance:**\n", " * Recognize signs of a medical emergency and connect users with appropriate help.\n", "\n", "**Why Agentic AI is well-suited:**\n", "\n", "* **Autonomy:** The agent can independently schedule appointments, track claims, and proactively manage healthcare tasks without constant user intervention.\n", "* **Adaptability:** The agent can learn from user interactions and continuously improve its recommendations and support.\n", "* **Personalization:** The agent can tailor its approach to each individual's unique needs and preferences.\n", "* **Reasoning:** The agent can analyze complex medical information and insurance policies to provide informed guidance.\n", "\n", "**Potential Benefits:**\n", "\n", "* **Improved health outcomes:** By helping individuals proactively manage their health and access the right care at the right time.\n", "* **Reduced healthcare costs:** By preventing unnecessary hospitalizations and improving medication adherence.\n", "* **Increased patient satisfaction:** By simplifying the healthcare process and providing personalized support.\n", "* **Reduced administrative burden for healthcare providers:** By automating tasks such as appointment scheduling and insurance claim submission.\n", "\n", "**Challenges:**\n", "\n", "* **Data privacy and security:** Protecting sensitive medical information is crucial.\n", "* **Trust and transparency:** Users need to trust the agent and understand its decision-making process.\n", "* 
**Integration with existing healthcare systems:** Integrating with fragmented systems and data sources is complex.\n", "* **Regulation and compliance:** Navigating the complex regulatory landscape of healthcare.\n", "\n", "Despite these challenges, the potential benefits of an Agentic AI-powered healthcare navigator and coordinator are significant, making it a promising area for exploration. It aligns with the trends of personalized medicine, preventative care, and patient empowerment, all while streamlining complex administrative tasks.\n", "\n" ] } ], "source": [ "# First create the messages:\n", "\n", "messages = \"pick a business area that might be worth exploring for an Agentic AI opportunity\"\n", "\n", "# Then make the first call:\n", "\n", "response = client.models.generate_content(\n", " model=\"gemini-2.0-flash\",\n", " contents=messages\n", ")\n", "\n", "# Then read the business idea:\n", "\n", "business_idea = response.text\n", "print(\"business idea: \", business_idea)\n", "# And repeat! 
In the next message, include the business idea within the message" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "## Business Area: Personalized Healthcare Navigation and Coordination\n", "\n", "This area is ripe for Agentic AI because it involves:\n", "\n", "* **Complex, personalized needs:** Individuals have unique health conditions, insurance plans, access to resources, and preferences.\n", "* **Fragmented information:** Health information is scattered across providers, insurance companies, pharmacies, and personal records.\n", "* **High emotional stakes:** People are often stressed and confused when dealing with healthcare issues.\n", "* **Repetitive tasks:** Booking appointments, tracking medications, submitting insurance claims, and gathering test results are time-consuming.\n", "\n", "**Agentic AI Opportunity:**\n", "\n", "An Agentic AI system could act as a **Personalized Healthcare Navigator and Coordinator** for individuals, offering the following services:\n", "\n", "* **Understanding Individual Needs:**\n", " * Securely access and analyze medical records, insurance plans, and personal preferences through user consent and APIs.\n", " * Use NLP to understand user needs and concerns expressed through text or voice.\n", " * Maintain a comprehensive and up-to-date user profile.\n", "* **Proactive Healthcare Recommendations:**\n", " * Identify potential health risks based on user data and suggest preventative screenings or lifestyle changes.\n", " * Recommend specialists and providers based on user's condition, insurance, location, and preferences, leveraging rating and review data.\n", "* **Seamless Appointment Scheduling:**\n", " * Automatically check availability, negotiate appointment times with providers, and send reminders.\n", " * Handle rescheduling and cancellations.\n", "* **Medication Management:**\n", " * Remind users to take medication, track refills, and identify potential drug 
interactions.\n", " * Help users find the best prices for their medications.\n", "* **Insurance Claim Support:**\n", " * Automatically submit insurance claims and track their progress.\n", " * Help users understand their coverage and navigate complex billing processes.\n", " * Identify and appeal denied claims.\n", "* **Health Education and Support:**\n", " * Provide personalized educational materials and resources based on user's condition and interests.\n", " * Connect users with support groups and communities.\n", "* **Emergency Assistance:**\n", " * Recognize signs of a medical emergency and connect users with appropriate help.\n", "\n", "**Why Agentic AI is well-suited:**\n", "\n", "* **Autonomy:** The agent can independently schedule appointments, track claims, and proactively manage healthcare tasks without constant user intervention.\n", "* **Adaptability:** The agent can learn from user interactions and continuously improve its recommendations and support.\n", "* **Personalization:** The agent can tailor its approach to each individual's unique needs and preferences.\n", "* **Reasoning:** The agent can analyze complex medical information and insurance policies to provide informed guidance.\n", "\n", "**Potential Benefits:**\n", "\n", "* **Improved health outcomes:** By helping individuals proactively manage their health and access the right care at the right time.\n", "* **Reduced healthcare costs:** By preventing unnecessary hospitalizations and improving medication adherence.\n", "* **Increased patient satisfaction:** By simplifying the healthcare process and providing personalized support.\n", "* **Reduced administrative burden for healthcare providers:** By automating tasks such as appointment scheduling and insurance claim submission.\n", "\n", "**Challenges:**\n", "\n", "* **Data privacy and security:** Protecting sensitive medical information is crucial.\n", "* **Trust and transparency:** Users need to trust the agent and understand its 
decision-making process.\n", "* **Integration with existing healthcare systems:** Integrating with fragmented systems and data sources is complex.\n", "* **Regulation and compliance:** Navigating the complex regulatory landscape of healthcare.\n", "\n", "Despite these challenges, the potential benefits of an Agentic AI-powered healthcare navigator and coordinator are significant, making it a promising area for exploration. It aligns with the trends of personalized medicine, preventative care, and patient empowerment, all while streamlining complex administrative tasks.\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(Markdown(business_idea))" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "pain-point: Okay, let's focus on the **e-commerce industry**, specifically on the challenge of **optimizing product descriptions and metadata for maximum discoverability and conversion.**\n", "\n", "**Pain Point: Inconsistent and Ineffective Product Information at Scale**\n", "\n", "E-commerce companies, especially those with large catalogs (think Amazon, Wayfair, big box retailers, etc.), struggle with maintaining consistent and high-quality product information across their entire inventory. This includes:\n", "\n", "* **Inconsistent Formatting:** Different vendors provide product information in wildly varying formats (spreadsheets, PDFs, XML feeds, etc.) and with differing levels of detail.\n", "* **Lack of Keyword Optimization:** Product descriptions often lack relevant keywords that customers use when searching. Descriptions might focus on brand jargon or internal codes rather than customer needs.\n", "* **Outdated or Inaccurate Information:** Product specifications, pricing, and availability change frequently. 
Keeping all information up-to-date is a constant battle.\n", "* **Poorly Written Descriptions:** Descriptions are sometimes generic, poorly written, or even auto-generated, failing to highlight key benefits and features that drive purchase decisions.\n", "* **Difficulty A/B Testing at Scale:** Manually creating and testing different product descriptions to optimize conversion rates is time-consuming and impractical for vast product catalogs.\n", "* **Multilingual Challenges:** Companies selling internationally need to translate and localize product information, introducing further potential for inconsistency and errors.\n", "\n", "**Why this is a significant pain point:**\n", "\n", "* **Reduced Search Visibility:** Poorly optimized product descriptions mean products are less likely to appear in relevant search results (both on-site and on search engines like Google).\n", "* **Lower Conversion Rates:** Vague or incomplete product information leads to buyer uncertainty and reduced purchase rates. 
Customers are more likely to abandon their carts if they can't find the information they need.\n", "* **Increased Customer Service Costs:** Customers contact support to ask questions that should have been answered in the product description, increasing operational costs.\n", "* **Damaged Brand Reputation:** Inaccurate or outdated product information can damage a company's reputation and erode customer trust.\n", "* **Lost Revenue:** Ultimately, all these factors contribute to lost revenue opportunities.\n", "\n", "**Why this is ripe for an Agentic Solution:**\n", "\n", "This pain point is ideal for an agentic solution because it requires:\n", "\n", "* **Large-scale data processing and analysis:** Analyzing vast amounts of product data from diverse sources.\n", "* **Natural Language Processing (NLP):** Understanding and generating high-quality product descriptions.\n", "* **Continuous learning and adaptation:** Adapting to changing market trends, customer search behavior, and product updates.\n", "* **Autonomous decision-making:** Making recommendations for product description improvements, keyword optimization, and A/B testing.\n", "* **Integration with existing systems:** Seamlessly integrating with product information management (PIM) systems, e-commerce platforms, and analytics dashboards.\n", "\n", "**In essence, the e-commerce industry needs an intelligent agent (or team of agents) that can automatically analyze, optimize, and maintain product information at scale, freeing up human employees to focus on more strategic tasks.**\n", "\n" ] } ], "source": [ "# Include the business idea from the first call, so this call has the context\n", "messages = \"Present a pain-point in this industry - something challenging that might be ripe for an Agentic solution: \" + business_idea\n", "response = client.models.generate_content(\n", " model=\"gemini-2.0-flash\",\n", " contents=messages\n", ")\n", "pain_point = response.text\n", "print(\"pain-point: \", pain_point)" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "ename": 
"ClientError", "evalue": "429 RESOURCE_EXHAUSTED. {'error': {'code': 429, 'message': 'Resource exhausted. Please try again later. Please refer to https://cloud.google.com/vertex-ai/generative-ai/docs/error-code-429 for more details.', 'status': 'RESOURCE_EXHAUSTED'}}", "output_type": "error", "traceback": [ "---------------------------------------------------------------------------\n", "ClientError                               Traceback (most recent call last)\n", "Cell In[36], line 2\n      1 messages = \"propose the Agentic AI solution for this \" + pain_point\n----> 2 response = client.models.generate_content(\n      3     model=\"gemini-2.0-flash\",\n      4     contents=messages\n      5 )\n", "[... internal google.genai and tenacity retry frames omitted ...]\n", "ClientError: 429 RESOURCE_EXHAUSTED. {'error': {'code': 429, 'message': 'Resource exhausted. Please try again later. 
Please refer to https://cloud.google.com/vertex-ai/generative-ai/docs/error-code-429 for more details.', 'status': 'RESOURCE_EXHAUSTED'}}" ] } ], "source": [ "# Ask Gemini to propose an Agentic AI solution for the pain point identified above\n", "messages = \"propose the Agentic AI solution for this \" + pain_point\n", "\n", "# If this cell fails with a 429 RESOURCE_EXHAUSTED error, you've hit a rate limit --\n", "# wait a minute and re-run the cell, or switch to a model with more available quota\n", "response = client.models.generate_content(\n", "    model=\"gemini-2.0-flash\",\n", "    contents=messages\n", ")\n", "solution = response.text\n", "print(\"solution:\", solution)" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.12" } }, "nbformat": 4, "nbformat_minor": 2 }