Deploying LLMs in Production#
In today’s fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it is crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.)
In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of the business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, and preprocessing.
Case 2: Self-hosted Open-Source Models
Alternatively, developers can opt for smaller yet comparably capable self-hosted open-source LLM models. This approach can significantly decrease costs and latency, and reduce the privacy concerns associated with transferring data to external LLM providers.
Regardless of the framework that forms the backbone of your product, deploying LLM applications comes with its own set of challenges. It is vital to understand the trade-offs and key considerations when evaluating serving frameworks.
Outline#
This guide aims to provide a comprehensive overview of the requirements for deploying LLMs in a production setting, focusing on:
Designing a Robust LLM Application Service
Maintaining Cost-Efficiency
Ensuring Rapid Iteration
Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:
Ray Serve
BentoML
Modal
These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.
Designing a Robust LLM Application Service#
When deploying an LLM service in production, it is imperative to provide a seamless user experience free from outages. Achieving 24/7 service availability involves creating and maintaining several sub-systems around your application.
Monitoring#
Monitoring forms an integral part of any system running in a production environment. In the context of LLMs, it is essential to monitor both performance and quality metrics.
Performance Metrics: These metrics provide insights into the efficiency and capacity of your model. Here are some key examples:
Queries per second (QPS): This measures the number of queries your model processes in a second, offering insights into its utilization.
Latency: This metric quantifies the delay from when your client sends a request to when they receive a response.
Tokens per second (TPS): This represents the number of tokens your model can generate in a second.
Quality Metrics: These metrics are typically customized according to the business use case. For instance, how does the output of your system compare to a baseline, such as a previous version? Although these metrics can be calculated offline, you need to log the necessary data to use them later.
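As a rough illustration of capturing these performance metrics, here is a minimal sketch (not from the original guide) that wraps a model call and records latency and tokens per second; the generate callable and the print-based metric sink are hypothetical placeholders for your real setup:

import time

def call_with_metrics(generate, prompt):
    """Wrap a model call and record latency and tokens per second.

    `generate` is any callable returning (text, num_tokens); both it
    and the metric sink below are illustrative stand-ins.
    """
    start = time.monotonic()
    text, num_tokens = generate(prompt)
    latency = time.monotonic() - start
    tokens_per_second = num_tokens / latency if latency > 0 else 0.0
    # In production, export these to a metrics system (e.g. Prometheus)
    # rather than printing them.
    print(f"latency={latency:.3f}s tokens_per_second={tokens_per_second:.1f}")
    return text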
Fault tolerance#
Your application may encounter errors such as exceptions in your model inference or business logic code, causing failures and disrupting traffic. Other potential issues could arise from the machine running your application, such as unexpected hardware breakdowns or the loss of spot instances during high-demand periods. One way to mitigate these risks is by increasing redundancy through replica scaling and implementing recovery mechanisms for failed replicas. However, model replicas aren’t the only potential points of failure. It’s essential to build resilience against the various failures that could occur at any point in your stack.
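One simple building block for this kind of resilience is retrying transient failures. The sketch below is an illustration, not part of the original guide; the infer callable is a hypothetical stand-in for an inference call that may raise on transient failures (e.g. a crashed replica or a reclaimed spot instance):

import random
import time

def call_with_retries(infer, request, max_attempts=3):
    """Retry a flaky inference call with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return infer(request)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the failure upstream
            # Back off 1s, 2s, 4s, ... plus jitter to avoid thundering herds.
            time.sleep(2 ** attempt + random.random())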
Zero downtime upgrades#
System upgrades are often necessary but can result in service disruptions if not handled correctly. One way to prevent downtime during upgrades is by implementing a smooth transition process from the old version to the new one. Ideally, the new version of your LLM service is deployed, and traffic gradually shifts from the old to the new version, maintaining a constant QPS throughout the process.
Load balancing#
Load balancing, in simple terms, is a technique to distribute work evenly across multiple computers, servers, or other resources to optimize the utilization of the system, maximize throughput, minimize response time, and avoid overload of any single resource. Think of it as a traffic officer directing cars (requests) to different roads (servers) so that no single road becomes too congested.
There are several strategies for load balancing. For example, one common method is the Round Robin strategy, where each request is sent to the next server in line, cycling back to the first when all servers have received a request. This works well when all servers are equally capable. However, if some servers are more powerful than others, you might use a Weighted Round Robin or Least Connections strategy, where more requests are sent to the more powerful servers, or to those currently handling the fewest active requests.
Let’s imagine you’re running an LLM chain. If your application becomes popular, you could have hundreds or even thousands of users asking questions at the same time. If one server gets too busy (high load), the load balancer directs new requests to another server that is less busy. This way, all your users get a timely response and the system remains stable.
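To make the Round Robin strategy concrete, here is a toy sketch (illustrative only; a real deployment would use a proxy or the router built into your serving framework, and the servers here are hypothetical callables):

import itertools

class RoundRobinBalancer:
    """Cycle incoming requests across a fixed pool of servers."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def handle(self, request):
        server = next(self._cycle)  # next server in line, wrapping around
        return server(request)

# Usage: balancer = RoundRobinBalancer([server_a, server_b])
#        balancer.handle("What is LangChain?")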
Maintaining Cost-Efficiency and Scalability#
Deploying LLM services can be costly, especially when you’re handling a large volume of user interactions. Charges by LLM providers are usually based on tokens used, making inference for a chat system on these models potentially expensive. However, several strategies can help manage these costs without compromising the quality of the service.
Self-hosting models#
Several smaller and open-source LLMs are emerging to tackle the issue of reliance on LLM providers. Self-hosting allows you to maintain similar quality to LLM provider models while managing costs. The challenge lies in building a reliable, high-performing LLM serving system on your own machines.
Resource Management and Auto-Scaling#
Computational logic within your application requires precise resource allocation. For instance, if part of your traffic is served by an OpenAI endpoint and another part by a self-hosted model, it’s crucial to allocate suitable resources for each. Auto-scaling (adjusting resource allocation based on traffic) can significantly impact the cost of running your application. This strategy requires a balance between cost and responsiveness, ensuring neither resource over-provisioning nor compromised application responsiveness.
Utilizing Spot Instances#
On platforms like AWS, spot instances offer substantial cost savings, typically priced at about a third of on-demand instances. The trade-off is a higher interruption rate, necessitating a robust fault-tolerance mechanism for effective use.
Independent Scaling#
When self-hosting your models, you should consider independent scaling. For example, if you have two translation models, one fine-tuned for French and another for Spanish, incoming requests might necessitate different scaling requirements for each.
Batching requests#
In the context of Large Language Models, batching requests can enhance efficiency by better utilizing your GPU resources. GPUs are inherently parallel processors, designed to handle multiple tasks simultaneously. If you send individual requests to the model, the GPU might not be fully utilized, as it’s only working on a single task at a time. By batching requests together, you allow the GPU to work on multiple tasks at once, maximizing its utilization and improving inference speed. This not only leads to cost savings but can also improve the overall latency of your LLM service.
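A minimal sketch of dynamic request batching follows (an illustration under stated assumptions: run_batch is a hypothetical function that executes a list of prompts in a single forward pass, and requests arrive on a standard queue):

import queue

def batch_worker(request_queue, run_batch, max_batch_size=8, max_wait=0.05):
    """Collect requests briefly, then execute them as one GPU batch."""
    while True:
        batch = [request_queue.get()]  # block until the first request arrives
        try:
            while len(batch) < max_batch_size:
                # Wait a short time for more requests to fill the batch.
                batch.append(request_queue.get(timeout=max_wait))
        except queue.Empty:
            pass  # window expired; run with whatever we collected
        run_batch(batch)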
In summary, managing costs while scaling your LLM services requires a strategic approach. Self-hosting models, managing resources effectively, employing auto-scaling, using spot instances, independently scaling models, and batching requests are key strategies to consider. Open-source libraries such as Ray Serve and BentoML are designed to deal with these complexities.
Ensuring Rapid Iteration#
The LLM landscape is evolving at an unprecedented pace, with new libraries and model architectures being introduced constantly. Consequently, it’s crucial to avoid tying yourself to a solution specific to one particular framework. This is especially relevant in serving, where changes to your infrastructure can be time-consuming, expensive, and risky. Strive for infrastructure that is not locked into any specific machine learning library or framework, but instead offers a general-purpose, scalable serving layer. Here are some aspects where flexibility plays a key role:
Model composition#
Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural-language SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feed back the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.
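A sketch of that kind of composition, with llm and db as hypothetical stand-ins for a language model client and a database connection (this is illustrative, not a real LangChain API):

def answer_with_sql(question, llm, db):
    """Illustrative natural-language SQL pipeline."""
    schema = db.get_table_info()                  # extract database metadata
    sql = llm(f"Schema:\n{schema}\n\nWrite a SQL query for: {question}")
    rows = db.run(sql)                            # execute on the SQL engine
    # Feed the result back to the LLM to phrase the final answer.
    return llm(f"Question: {question}\nSQL result: {rows}\nAnswer:")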
Cloud providers#
Many hosted solutions are restricted to a single cloud provider, which can limit your options in today’s multi-cloud world. Depending on where your other infrastructure components are built, you might prefer to stick with your chosen cloud provider.
Infrastructure as Code (IaC)#
Rapid iteration also involves the ability to recreate your infrastructure quickly and reliably. This is where Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Kubernetes YAML files come into play. They allow you to define your infrastructure in code files, which can be version controlled and quickly deployed, enabling faster and more reliable iterations.
CI/CD#
In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration.
Agents#
Note: Conceptual Guide
Some applications require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user’s input. In these types of chains, there is an agent which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call.
At the moment, there are two main types of agents:
Action Agents: these agents decide which actions to take and execute those actions one at a time.
Plan-and-Execute Agents: these agents first decide a plan of actions to take, and then execute those actions one at a time.
When should you use each one? Action Agents are more conventional, and good for small tasks. For more complex or long-running tasks, the initial planning step helps to maintain long-term objectives and focus. However, that comes at the expense of generally more calls and higher latency. These two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in charge of the execution for the Plan-and-Execute Agent.
Action Agents#
High-level pseudocode of the Action Agents (a concrete sketch follows at the end of this page):
The user input is received
The agent decides which tool - if any - to use, and what the tool input should be
That tool is then called with the tool input, and an observation is recorded (the output of this call)
That history of tool, tool input, and observation is passed back into the agent, and it decides the next step
This is repeated until the agent decides it no longer needs to use a tool, and then it responds directly to the user.
The different abstractions involved in agents are:
Agent: this is where the logic of the application lives. Agents expose an interface that takes in user input along with a list of previous steps the agent has taken, and returns either an AgentAction or an AgentFinish.
AgentAction corresponds to the tool to use and the input to that tool.
AgentFinish means the agent is done, and has information about what to return to the user.
Tools: these are the actions an agent can take. Which tools you give an agent highly depends on what you want the agent to do.
Toolkits: these are groups of tools designed for a specific use case. For example, in order for an agent to interact with a SQL database in the best way, it may need access to one tool to execute queries and another tool to inspect tables.
Agent Executor: this wraps an agent and a list of tools. It is responsible for the loop of running the agent iteratively until the stopping criteria are met.
Getting Started: An overview of agents. It covers how to use all things related to agents in an end-to-end manner.
Agent Construction: Although an agent can be constructed in many ways, the typical way to construct an agent is with:
PromptTemplate: this is responsible for taking the user input and previous steps and constructing a prompt to send to the language model
Language Model: this takes the prompt constructed by the PromptTemplate and returns some output
Output Parser: this takes the output of the Language Model and parses it into an AgentAction or AgentFinish object
Additional Documentation:
Tools: Different types of tools LangChain supports natively. We also cover how to add your own tools.
Agents: Different types of agents LangChain supports natively. We also cover how to modify and create your own agents.
Toolkits: Various toolkits that LangChain supports out of the box, and how to
create an agent from them.
Agent Executor: The Agent Executor class, which is responsible for calling the agent and tools in a loop. We go over different ways to customize this, and options you can use for more control.
Plan-and-Execute Agents#
High-level pseudocode of the Plan-and-Execute Agents:
The user input is received
The planner lists out the steps to take
The executor goes through the list of steps, executing them
The most typical implementation is to have the planner be a language model, and the executor be an action agent.
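To make the Action Agent pseudocode above concrete, here is a minimal sketch of the loop. The agent.plan interface, returning either a tool call or a finished answer, is an illustrative assumption rather than the exact LangChain API:

def run_agent(agent, tools, user_input, max_steps=10):
    """Illustrative Action Agent loop."""
    steps = []  # history of (action, observation) pairs
    for _ in range(max_steps):
        decision = agent.plan(user_input, steps)
        if decision.is_finish:                 # AgentFinish: answer the user
            return decision.output
        tool = tools[decision.tool]            # AgentAction: look up the tool
        observation = tool(decision.tool_input)
        steps.append((decision, observation))  # feed the history back in
    raise RuntimeError("Agent did not finish within max_steps")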
Models#
Note: Conceptual Guide
This section of the documentation deals with different types of models that are used in LangChain. On this page we will go over the model types at a high level, but we have individual pages for each model type. The pages contain more detailed “how-to” guides for working with that model, as well as a list of different model providers.
Getting Started: An overview of the models.
Model Types#
LLMs: Large Language Models (LLMs) take a text string as input and return a text string as output.
Chat Models: Chat Models are usually backed by a language model, but their APIs are more structured. Specifically, these models take a list of Chat Messages as input, and return a Chat Message.
Text Embedding Models: Text embedding models take text as input and return a list of floats.
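As a quick sketch of the three interfaces, using the OpenAI integrations (assumes an OPENAI_API_KEY is set in the environment):

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import HumanMessage

llm = OpenAI()                                   # text in -> text out
text = llm("Tell me a joke.")

chat = ChatOpenAI()                              # messages in -> message out
message = chat([HumanMessage(content="Tell me a joke.")])

embeddings = OpenAIEmbeddings()                  # text in -> list of floats
vector = embeddings.embed_query("Tell me a joke.")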
Chains#
Note: Conceptual Guide
Using an LLM in isolation is fine for some simple applications, but more complex applications require chaining LLMs - either with each other or with other experts. LangChain provides a standard interface for Chains, as well as several common implementations of chains.
Getting Started: An overview of chains.
How-To Guides: How-to guides about various types of chains.
Reference: API reference documentation for all Chain classes.
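For instance, a minimal chain combining a prompt template with an LLM, using the standard LLMChain interface (assumes OpenAI credentials are configured):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
chain.run("colorful socks")  # -> e.g. "Rainbow Threads"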
Memory#
Note: Conceptual Guide
By default, Chains and Agents are stateless, meaning that they treat each incoming query independently (as do the underlying LLMs and chat models). In some applications (chatbots being a great example) it is highly important to remember previous interactions, both at a short-term and a long-term level. Memory does exactly that.
LangChain provides memory components in two forms. First, LangChain provides helper utilities for managing and manipulating previous chat messages. These are designed to be modular and useful regardless of how they are used. Secondly, LangChain provides easy ways to incorporate these utilities into chains.
Getting Started: An overview of the different types of memory.
How-To Guides: A collection of how-to guides. These highlight different types of memory, as well as how to use memory in chains.
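As a small sketch of memory in a chain, using the standard ConversationBufferMemory (assumes OpenAI credentials are configured):

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # keeps prior turns verbatim
)
conversation.predict(input="Hi, my name is Ada.")
conversation.predict(input="What is my name?")  # memory supplies the earlier turn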
Prompts#
Note: Conceptual Guide
The new way of programming models is through prompts. A prompt refers to the input to the model. This input is often constructed from multiple components. A PromptTemplate is responsible for the construction of this input. LangChain provides several classes and functions to make constructing and working with prompts easy.
Getting Started: An overview of prompts.
LLM Prompt Templates: How to use PromptTemplates to prompt Language Models.
Chat Prompt Templates: How to use PromptTemplates to prompt Chat Models.
Example Selectors: Often it is useful to include examples in prompts. These examples can be dynamically selected. This section goes over example selection.
Output Parsers: Language models (and Chat Models) output text. But many times you may want to get more structured information. This is where output parsers come in. Output parsers instruct the model how its output should be formatted, and parse the output into the desired format (including retrying if necessary).
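A minimal PromptTemplate sketch:

from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}.",
)
template.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens."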
Indexes#
Note: Conceptual Guide
Indexes refer to ways to structure documents so that LLMs can best interact with them. The most common way that indexes are used in chains is in a “retrieval” step. This step refers to taking a user’s query and returning the most relevant documents. We draw this distinction because (1) an index can be used for other things besides retrieval, and (2) retrieval can use other logic besides an index to find relevant documents. We therefore have a concept of a Retriever interface - this is the interface that most chains work with.
Most of the time when we talk about indexes and retrieval we are talking about indexing and retrieving unstructured data (like text documents). For interacting with structured data (SQL tables, etc.) or APIs, please see the corresponding use case sections for links to relevant functionality.
Getting Started: An overview of the indexes.
Index Types#
Document Loaders: How to load documents from a variety of sources.
Text Splitters: An overview and the different types of Text Splitters.
Vectorstores: An overview and the different types of Vector Stores.
Retrievers: An overview and the different types of Retrievers.
Document Loaders#
Note: Conceptual Guide
Combining language models with your own text data is a powerful way to differentiate them. The first step in doing this is to load the data into “Documents” - a fancy way of saying some pieces of text. The document loader is aimed at making this easy.
The following document loaders are provided:
Transform loaders#
These transform loaders transform data from a specific format into the Document format. For example, there are transformers for CSV and SQL. Mostly, these loaders input data from files, but sometimes from URLs. A primary driver of a lot of these transformers is the Unstructured Python package. This package transforms many types of files - text, PowerPoint, images, HTML, PDF, etc. - into text data. For detailed instructions on how to get set up with Unstructured, see the installation guidelines here.
Airtable
OpenAIWhisperParser
CoNLL-U
Copy Paste
CSV
Email
EPub
EverNote
Microsoft Excel
Facebook Chat
File Directory
HTML
Images
Jupyter Notebook
JSON
Markdown
Microsoft PowerPoint
Microsoft Word
Open Document Format (ODT)
Pandas DataFrame
PDF
Sitemap
Subtitle
Telegram
TOML
Unstructured File
URL
Selenium URL Loader
Playwright URL Loader
WebBaseLoader
Weather
WhatsApp Chat
Public dataset or service loaders#
These datasets and sources are created for the public domain, and we use queries to search them and download the necessary documents. For example, the Hacker News service. We don’t need any access permissions for these datasets and services.
Arxiv
AZLyrics
BiliBili
College Confidential
Gutenberg
Hacker News
HuggingFace dataset
iFixit
IMSDb
MediaWikiDump
Wikipedia
YouTube transcripts
Proprietary dataset or service loaders#
These datasets and services are not in the public domain. These loaders mostly transform data from specific formats of applications or cloud services, for example Google Drive. We need access tokens and sometimes other parameters to get access to these datasets and services.
Airbyte JSON
Apify Dataset
AWS S3 Directory
AWS S3 File
Azure Blob Storage Container
Azure Blob Storage File
Blackboard
Blockchain
ChatGPT Data
Confluence
Diffbot
Docugami
DuckDB
Fauna
Figma
GitBook
Git
Google BigQuery
Google Cloud Storage Directory
Google Cloud Storage File
Google Drive
Image captions
Iugu
Joplin
Microsoft OneDrive
Modern Treasury
Notion DB 2/2
Notion DB 1/2
Obsidian
Psychic
PySpark DataFrame Loader
ReadTheDocs Documentation
Reddit
Roam
Slack
Snowflake
Spreedly
Stripe
2Markdown
Twitter
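For example, loading a CSV file with the CSV transform loader (the file path below is a hypothetical placeholder):

from langchain.document_loaders import CSVLoader

loader = CSVLoader(file_path="./example_data/sample.csv")  # hypothetical path
docs = loader.load()  # one Document per row by default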
Vectorstores#
Note: Conceptual Guide
Vectorstores are one of the most important components of building indexes. For an introduction to vectorstores and generic functionality see: Getting Started.
We also have documentation for all the types of vectorstores that are supported. Please see below for that list.
AnalyticDB
Annoy
Atlas
AwaDB
Azure Cognitive Search
Chroma
ClickHouse Vector Search
Deep Lake
DocArrayHnswSearch
DocArrayInMemorySearch
ElasticSearch
FAISS
Hologres
LanceDB
MatchingEngine
Milvus
MyScale
OpenSearch
PGVector
Pinecone
Qdrant
Redis
SingleStoreDB vector search
SKLearnVectorStore
Supabase (Postgres)
Tair
Tigris
Typesense
Vectara
Weaviate
Zilliz
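A quick sketch of the generic vectorstore workflow using FAISS (assumes OpenAI credentials and the faiss package are available):

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

texts = [
    "LangChain provides a standard interface for vectorstores.",
    "FAISS is a library for efficient similarity search.",
]
db = FAISS.from_texts(texts, OpenAIEmbeddings())
docs = db.similarity_search("What interface does LangChain provide?")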
Text Splitters#
Note: Conceptual Guide
When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What “semantically related” means could depend on the type of text. This notebook showcases several ways to do that.
At a high level, text splitters work as follows:
Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter:
How the text is split
How the chunk size is measured
For an introduction to the default text splitter and generic functionality see: Getting Started.
Usage examples for the text splitters:
Character
Code (including HTML, Markdown, LaTeX, Python, etc.)
NLTK
Recursive Character
spaCy
tiktoken (OpenAI)
Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters. In order to get a more accurate estimate, we can use tokenizers to count the number of tokens in the text. We use this number inside the TextSplitter classes. This is implemented as the from_<tokenizer> methods of the TextSplitter classes:
Hugging Face tokenizer
tiktoken (OpenAI) tokenizer
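For instance, the recursive character splitter configured along both axes (long_text below is a placeholder for any long string):

from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.split_text(long_text)  # list of overlapping ~100-char chunks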
Retrievers#
Note: Conceptual Guide
The retriever interface is a generic interface that makes it easy to combine documents with language models. This interface exposes a get_relevant_documents method, which takes in a query (a string) and returns a list of documents.
Please see below for a list of all the retrievers supported.
Arxiv
AWS Kendra
Azure Cognitive Search
ChatGPT Plugin
Self-querying with Chroma
Cohere Reranker
Contextual Compression
Databerry
ElasticSearch BM25
kNN
LOTR (Merger Retriever)
Metal
Pinecone Hybrid Search
PubMed Retriever
Self-querying with Qdrant
Self-querying
SVM
TF-IDF
Time Weighted VectorStore
VectorStore
Vespa
Weaviate Hybrid Search
Self-querying with Weaviate
Wikipedia
Zep
Getting Started#
LangChain primarily focuses on constructing indexes with the goal of using them as a Retriever. In order to best understand what this means, it’s worth highlighting what the base Retriever interface is. The BaseRetriever class in LangChain is as follows:

from abc import ABC, abstractmethod
from typing import List
from langchain.schema import Document

class BaseRetriever(ABC):
    @abstractmethod
    def get_relevant_documents(self, query: str) -> List[Document]:
        """Get texts relevant for a query.

        Args:
            query: string to find relevant texts for

        Returns:
            List of relevant documents
        """

It’s that simple! The get_relevant_documents method can be implemented however you see fit. Of course, we also help construct what we think useful Retrievers are. The main type of Retriever that we focus on is a Vectorstore retriever. We will focus on that for the rest of this guide.
In order to understand what a vectorstore retriever is, it’s important to understand what a Vectorstore is. So let’s look at that. By default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we’ll first need to install chromadb:

pip install chromadb

This example showcases question answering over documents. We have chosen this as the example for getting started because it nicely combines a lot of different elements (text splitters, embeddings, vectorstores) and then also shows how to use them in a chain.
Question answering over documents consists of four steps:
Create an index
Create a Retriever from that index
Create a question answering chain
Ask questions!
Each of the steps has multiple sub-steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on.
First, let’s import some common classes we’ll use no matter what:

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

Next in the generic setup, let’s specify the document loader we want to use. You can download the state_of_the_union.txt file here.

from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt', encoding='utf8')

One Line Index Creation#
To get started as quickly as possible, we can use the VectorstoreIndexCreator.

from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.

query = "What did the president say about Ketanji Brown Jackson"
index.query(query)

" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."

query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
{'question': 'What did the president say about Ketanji Brown Jackson',
 'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
 'sources': '../state_of_the_union.txt'}

What is returned from the VectorstoreIndexCreator is a VectorStoreIndexWrapper, which provides this nice query and query_with_sources functionality. If we just wanted to access the vectorstore directly, we can also do that.

index.vectorstore

<langchain.vectorstores.chroma.Chroma at 0x119aa5940>

If we then want to access the VectorstoreRetriever, we can do that with:

index.vectorstore.as_retriever()

VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})

Walkthrough#
Okay, so what’s actually going on? How is this index getting created? A lot of the magic is being hidden in this VectorstoreIndexCreator. What is it doing? There are three main steps going on after the documents are loaded:
Splitting documents into chunks
Creating embeddings for each document
Storing documents and embeddings in a vectorstore
Let’s walk through this in code.

documents = loader.load()

Next, we will split the documents into chunks.

from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

We will then select which embeddings we want to use.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()

We now create the vectorstore to use as the index.

from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

So that’s creating the index. Then, we expose this index in a retriever interface.

retriever = db.as_retriever()

Then, as before, we create a chain and use it to answer questions!

qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)

" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."

VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below:

index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)

Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it’s important to have a simple way to create indexes, we also think it’s important to understand what’s going on under the hood.
URL#
This covers how to load HTML documents from a list of URLs into a document format that we can use downstream.

from langchain.document_loaders import UnstructuredURLLoader

urls = [
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023"
]
loader = UnstructuredURLLoader(urls=urls)
data = loader.load()

Selenium URL Loader#
This covers how to load HTML documents from a list of URLs using the SeleniumURLLoader. Using Selenium allows us to load pages that require JavaScript to render.
Setup#
To use the SeleniumURLLoader, you will need to install selenium and unstructured.

from langchain.document_loaders import SeleniumURLLoader

urls = [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://goo.gl/maps/NDSHwePEyaHMFGwh8"
]
loader = SeleniumURLLoader(urls=urls)
data = loader.load()

Playwright URL Loader#
This covers how to load HTML documents from a list of URLs using the PlaywrightURLLoader. As in the Selenium case, Playwright allows us to load pages that need JavaScript to render.
Setup#
To use the PlaywrightURLLoader, you will need to install playwright and unstructured. Additionally, you will need to install the Playwright Chromium browser:

# Install playwright
!pip install "playwright"
!pip install "unstructured"
!playwright install
from langchain.document_loaders import PlaywrightURLLoader

urls = [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://goo.gl/maps/NDSHwePEyaHMFGwh8"
]
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])
data = loader.load()
Confluence#
Confluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities.
This is a loader for Confluence pages. It currently supports username/api_key and OAuth2 login. Additionally, on-prem installations also support token authentication.
Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments. This is set to False by default; if set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word, and Excel.
Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Before using ConfluenceLoader, make sure you have the latest version of the atlassian-python-api package installed:

#!pip install atlassian-python-api

Examples#
Username and Password or Username and API Token (Atlassian Cloud only)#
This example authenticates using either a username and password or, if you’re connecting to an Atlassian Cloud hosted version of Confluence, a username and an API Token. You can generate an API token at: https://id.atlassian.com/manage-profile/security/api-tokens.
The limit parameter specifies how many documents will be retrieved in a single call, not how many documents will be retrieved in total.
By default the code will return up to 1000 documents in 50-document batches. To control the total number of documents, use the max_pages parameter. Please note that the maximum value for the limit parameter in the atlassian-python-api package is currently 100.

from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    username="me",
    api_key="12345"
)
documents = loader.load(space_key="SPACE", include_attachments=True, limit=50)

Personal Access Token (Server/On-Prem only)#
This method is valid for the Data Center/Server on-prem edition only. For more information on how to generate a Personal Access Token (PAT), check the official Confluence documentation at: https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html.
When using a PAT you provide only the token value; you cannot provide a username. Please note that ConfluenceLoader will run under the permissions of the user that generated the PAT and will only be able to load documents to which said user has access.

from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    token="12345"
)
documents = loader.load(space_key="SPACE", include_attachments=True, limit=50, max_pages=50)
OpenAIWhisperParser#
This notebook goes over how to load data from an audio file, such as an mp3. We use the OpenAIWhisperParser, which will use the OpenAI Whisper API to transcribe audio to text.
Note: You will need to have an OPENAI_API_KEY supplied.

from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import OpenAIWhisperParser

# Directory contains audio for the first 20 minutes of one Andrej Karpathy video
# "The spelled-out intro to neural networks and backpropagation: building micrograd"
# https://www.youtube.com/watch?v=VMj-3S1tku0
audio_file_path = "example_data/"
loader = GenericLoader.from_filesystem(audio_file_path, glob="*.mp3", parser=OpenAIWhisperParser())
docs = loader.load()
docs
[Document(page_content="Hello, my name is Andrej and I've been training deep neural networks for a bit more than a decade. And in this lecture I'd like to show you what neural network training looks like under the hood. So in particular we are going to start with a blank Jupyter notebook and by the end of this lecture we will define and train a neural net and you'll get to see everything that goes on under the hood and exactly sort of how that works on an intuitive level. Now specifically what I would like to do is I would like to take you through building of micrograd. Now micrograd is this library that I released on GitHub about two years ago but at the time I only uploaded the source code and you'd have to go in by yourself and really figure out how it works. So in this lecture I will take you through it step by step and kind of comment on all the pieces of it. So what is micrograd and why is it interesting? Thank you. Micrograd is basically an autograd engine. Autograd is short for automatic gradient and really what it does is it implements back propagation. Now back propagation is this algorithm that allows you to efficiently evaluate the gradient of some kind of a loss function with respect to the weights of a neural network and what that allows us to do then is we can iteratively tune the weights of that neural network to minimize the loss function and therefore improve the accuracy of the network. So back propagation would be at the mathematical core of any modern deep neural network library like say PyTorch or JAX. So the functionality of micrograd is I think best illustrated by an example. So if we just scroll down here you'll see that micrograd basically allows you to build out mathematical expressions and here what we are doing is we have an expression that we're building out where you have two inputs a and b and you'll see that a and b are negative four and two but we are wrapping those values into
this value object that we are going to build out as part of micrograd. So this value object will wrap the numbers themselves and then we are going to build out a mathematical expression here where a and b are transformed into c d and eventually e f and g and I'm showing some of the functionality of micrograd and the operations that it supports. So you can add two value objects, you can multiply them, you can raise them to a constant power, you can offset by one, negate, squash at zero, square, divide by constant, divide by it, etc. And so we're building out an expression graph with these two inputs a and b and we're creating an output value of g and micrograd will in the background build out this entire mathematical expression. So it will for example know that c is also a value, c was a result of an addition operation and the child nodes of c are a and b because the and it will maintain pointers to a and b value objects. So we'll basically know exactly how all of this is laid out and then not only can we do what we call the forward pass where we actually look at the value of g of course, that's pretty straightforward, we will access that using the dot data attribute and so the output of the forward pass, the value of g, is 24.7 it turns out. But the big deal is that we can also take this g value object and we can call dot backward and this will basically initialize backpropagation at the node g. And what backpropagation is going to do is it's going to start at g and it's going to go backwards through that expression graph and it's going to recursively apply the chain rule from calculus. And what that allows us to do then is we're going to evaluate basically the derivative of g with respect to all the internal nodes like e, d,
and c but also with respect to the inputs a and b. And then we can actually query this derivative of g with respect to a, for example that's a.grad, in this case it happens to be 138, and the derivative of g with respect to b which also happens to be here 645. And this derivative we'll see soon is very important information because it's telling us how a and b are affecting g through this mathematical expression. So in particular a.grad is 138, so if we slightly nudge a and make it slightly larger, 138 is telling us that g will grow and the slope of that growth is going to be 138 and the slope of growth of b is going to be 645. So that's going to tell us about how g will respond if a and b get tweaked a tiny amount in a positive direction. Now you might be confused about what this expression is that we built out here and this expression by the way is completely meaningless. I just made it up, I'm just flexing about the kinds of operations that are supported by micrograd. What we actually really care about are neural networks but it turns out that neural networks are just mathematical expressions just like this one but actually slightly a bit less crazy even. Neural networks are just a mathematical expression, they take the input data as an input and they take the weights of a neural network as an input and it's a mathematical expression and the output are your predictions of your neural net or the loss function, we'll see this in a bit. But basically neural networks just happen to be a certain class of mathematical expressions but back propagation is actually significantly more general. It doesn't actually care about neural networks at all, it only cares about arbitrary mathematical expressions and then we happen to use that machinery for training of neural networks. Now one more note I would like to make at this stage is
that as you see here micrograd is a scalar valued autograd engine so it's working on the you know level of individual scalars like negative 4 and 2 and we're taking neural nets and we're breaking them down all the way to these atoms of individual scalars and all the little pluses and times and it's just excessive and so obviously you would never be doing any of this in production. It's really just done for pedagogical reasons because it allows us to not have to deal with these n-dimensional tensors that you would use in modern deep neural network library. So this is really done so that you understand and refactor out back propagation and chain rule and understanding of neural training and then if you actually want to train bigger networks you have to be using these tensors but none of the math changes, this is done purely for efficiency. We are basically taking all the scalars all the scalar values we're packaging them up into tensors which are just arrays of these scalars and then because we have these large arrays we're making operations on those large arrays that allows us to take advantage of the parallelism in a computer and all those operations can be done in parallel and then the whole thing runs faster but really none of the math changes and they're done purely for efficiency so I don't think that it's pedagogically useful to be dealing with tensors from scratch and I think and that's why I fundamentally wrote micrograd because you can understand how things work at the fundamental level and then you can speed it up later. Okay so here's the fun part. My claim is that micrograd is what you need to train neural networks and everything else is just efficiency so you'd think that micrograd would be a very complex piece of code and that turns out to not be the case. So if we just go to micrograd and you'll see that there's only two files here
in micrograd. This is the actual engine, it doesn't know anything about neural nets and this is the entire neural nets library on top of micrograd. So engine and nn.py. So the actual back propagation autograd engine that gives you the power of neural networks is literally 100 lines of code of like very simple python which we'll understand by the end of this lecture and then nn.py, this neural network library built on top of the autograd engine is like a joke. It's like we have to define what is a neuron and then we have to define what is a layer of neurons and then we define what is a multilayer perceptron which is just a sequence of layers of neurons and so it's just a total joke. So basically there's a lot of power that comes from only 150 lines of code and that's all you need to understand to understand neural network training and everything else is just efficiency and of course there's a lot to efficiency but fundamentally that's all that's happening. Okay so now let's dive right in and implement micrograd step by step. The first thing I'd like to do is I'd like to make sure that you have a very good understanding intuitively of what a derivative is and exactly what information it gives you. So let's start with some basic imports that I copy-paste in every jupyter notebook always and let's define a function, a scalar valued function f of x as follows. So I just made this up randomly. I just wanted a scalar valued function that takes a single scalar x and returns a single scalar y and we can call this function of course so we can pass in say 3.0 and get 20 back. Now we can also plot this function to get a sense of its shape. You can tell from the mathematical expression that this is probably a parabola, it's a quadratic and
so if we just create a set of scalar values that we can feed in using for example a range from negative 5 to 5 in steps of 0.25. So this is so x is just from negative 5 to 5 not including 5 in steps of 0.25 and we can actually call this function on this numpy array as well so we get a set of y's if we call f on x's and these y's are basically also applying the function on every one of these elements independently and we can plot this using matplotlib. So plt.plot x's and y's and we get a nice parabola. So previously here we fed in 3.0 somewhere here and we received 20 back which is here the y-coordinate. So now I'd like to think through what is the derivative of this function at any single input point x. So what is the derivative at different points x of this function? Now if you remember back to your calculus class you've probably derived derivatives so we take this mathematical expression 3x squared minus 4x plus 5 and you would write out on a piece of paper and you would apply the product rule and all the other rules and derive the mathematical expression of the great derivative of the original function and then you could plug in different texts and see what the derivative is. We're not going to actually do that because no one in neural networks actually writes out the expression for the neural net. It would be a massive expression, it would be thousands, tens of thousands of terms. No one actually derives the derivative of course and so we're not going to take this kind of like symbolic approach. Instead what I'd like to do is I'd like to look at the definition of derivative and just make sure that we really understand what the derivative is measuring, what it's telling you about the function. And so if
we just look up derivative we see that okay so this is not a very good definition of derivative. This is a definition of what it means to be differentiable but if you remember from your calculus it is the limit as h goes to zero of f of x plus h minus f of x over h. So basically what it's saying is if you slightly bump up your at some point x that you're interested in or a and if you slightly bump up you know you slightly increase it by small number h how does the function respond with what sensitivity does it respond where is the slope at that point does the function go up or does it go down and by how much and that's the slope of that function the the slope of that response at that point and so we can basically evaluate the derivative here numerically by taking a very small h of course the definition would ask us to take h to zero we're just going to pick a very small h 0.001 and let's say we're interested in 0.3.0 so we can look at f of x of course as 20 and now f of x plus h so if we slightly nudge x in a positive direction how is the function going to respond and just looking at this do you expand do you expect f of x plus h to be slightly greater than 20 or do you expect it to be slightly lower than 20 and since this 3 is here and this is 20 if we slightly go positively the function will respond positively so you'd expect this to be slightly greater than 20 and now by how much is telling you the sort of the the strength of that slope right the the size of the slope so f of x plus h minus f of x this is how much the function responded in a positive direction and we have to normalize by the run so we have the rise over run to get the slope so this
We have to normalize by the run, so we have rise over run, to get the slope. This of course is just a numerical approximation of the slope, because we would have to make h very, very small to converge to the exact amount. Now, if I add too many zeros, at some point I'm going to get an incorrect answer, because we're using floating point arithmetic and the representations of all these numbers in computer memory are finite; at some point we get into trouble. So we can converge towards the right answer with this approach, but basically at 3 the slope is 14. And you can see that by taking 3x squared minus 4x plus 5 and differentiating it in our head: 3x squared gives 6x, minus 4x gives minus 4, so the derivative is 6x minus 4; plug in x equals 3, and that's 18 minus 4, which is 14. So this is correct; that's at 3. Now how about the slope at, say, negative 3? What would you expect for the slope? Telling the exact value is really hard, but what is the sign of that slope? At negative 3, if we go slightly in the positive direction at x, the function actually goes down, and that tells you that the slope is negative: f of x plus h comes out slightly below f of x, and if we take the slope we expect something negative, negative 22. And at some point, of course, the slope would be zero. For this specific function I looked it up previously, and it's at 2 over 3; so at roughly 2 over 3, that's somewhere here, this derivative would be zero. Basically, at that precise point, if we nudge in the positive direction, the function doesn't respond; it stays almost the same, and so that's why the slope is zero.
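As a sketch, the same rise-over-run estimate at the points the transcript mentions (same f and h as above):

x = 3.0
(f(x + h) - f(x)) / h   # approximately 14, matching 6*x - 4 at x = 3

x = -3.0
(f(x + h) - f(x)) / h   # approximately -22; near x = 2/3 the estimate is approximately 0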
Okay, now let's look at a bit more complex case. We're going to start, you know, complexifying a bit: now we have a function with an output variable d that is a function of three scalar inputs, a, b and c. So a, b and c are some specific values, three inputs into our expression graph, and a single output, d. And if we just print d, we get 4. Now what I'd like to do is again look at the derivatives of d with respect to a, b and c, and think through, again, just the intuition of what this derivative is telling us. So in order to evaluate this derivative, we're going to get a bit hacky here: we're going to again have a very small value of h, and then we're going to fix the inputs at some values that we're interested in. This is the point a, b, c at which we're going to be evaluating the derivative of d with respect to all of a, b and c. So those are the inputs, and now we have d1 as that expression. Then we're going to, for example, look at the derivative of d with respect to a: we'll take a and we'll bump it by h, and then we'll get d2 to be the exact same function. And now we're going to print, you know, d1, d2, and the slope. The derivative, or slope, here will of course be d2 minus d1 divided by h: d2 minus d1 is how much the function increased when we bumped the specific input that we're interested in by a tiny amount, and this is normalized by h to get the slope. So when I just run this, we're going to print d1, which we know is 4; now d2 will have a bumped by h. So let's just think through a little bit what d2 will be, printed out here.
In particular, d1 will be 4; will d2 be a number slightly greater than 4 or slightly lower than 4? That's going to tell us the sign of the derivative. We're bumping a by h; b is minus 3, c is 10. So you can just intuitively think through what this derivative is doing: a will be slightly more positive, but b is a negative number, so if a is slightly more positive, because b is negative 3, we're actually going to be adding less to d. So you'd actually expect the value of the function to go down. Let's just see this: yeah, we went from 4 to 3.9996, and that tells you that the slope will be negative, a negative number, because we went down. And the exact amount of the slope is negative 3. You can also convince yourself that negative 3 is the right answer mathematically and analytically, because if you have a times b plus c, then differentiating a times b plus c with respect to a gives you just b, and indeed the value of b is negative 3, which is the derivative that we have. So you can tell that that's correct. Now if we do this with b, if we bump b by a little bit in the positive direction, we'd get a different slope. What is the influence of b on the output d? If we bump b by a tiny amount in the positive direction, then because a is positive we'll be adding more to d. And now what is the sensitivity, what is the slope of that addition? It might not surprise you that this should be 2. And why is it 2? Because dd by db, differentiating with respect to b, gives us a, and the value of a is 2, so that's also working well.
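A sketch of the bump-and-measure procedure for d = a*b + c that the transcript is narrating (the exact h in the lecture may differ, which is why the d2 printed there is 3.9996):

h = 0.001
a, b, c = 2.0, -3.0, 10.0
d1 = a*b + c          # 4.0

a += h                # bump a by h; the response is governed by b
d2 = a*b + c
print('d1', d1)
print('d2', d2)
print('slope', (d2 - d1) / h)  # approximately -3.0, i.e. the value of b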
And then if c gets bumped by a tiny amount h, then of course a times b is unaffected, and now c becomes a slight bit higher. What does that do to the function? It makes it a slight bit higher, because we're simply adding c, and it makes it higher by the exact same amount that we added to c. So that tells you that the slope is 1; that will be the rate at which d increases as we scale c. Okay, so we now have some intuitive sense of what this derivative is telling you about the function, and we'd like to move to neural networks now. As I mentioned, neural networks will be pretty massive mathematical expressions, so we need some data structures that maintain these expressions, and that's what we're going to start to build out now. We're going to build out this Value object that I showed you in the readme page of micrograd. So let me copy-paste a skeleton of the first, very simple Value object: class Value takes a single scalar value that it wraps and keeps track of, and that's it. So we can, for example, do Value of 2.0, and then we can look at its content, and Python will internally use the repr function to return this string, like that. So this is the value object", metadata={'source': 'example_data/Lecture_1_0.mp3'})]
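A sketch of the Value skeleton the transcript describes (matching the early micrograd class in spirit; the exact details come from the lecture notebook and may vary):

class Value:
    def __init__(self, data):
        self.data = data  # the single scalar this object wraps and keeps track of

    def __repr__(self):
        # Python calls __repr__ when displaying the object, e.g. Value(data=2.0)
        return f"Value(data={self.data})"

v = Value(2.0)
v  # displays as Value(data=2.0)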
Psychic Contents Prerequisites Loading documents Converting the docs to embeddings Psychic# This notebook covers how to load documents from Psychic. See here for more details. Prerequisites# Follow the Quick Start section in this document Log into the Psychic dashboard and get your secret key Install the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify. Loading documents# Use the PsychicLoader class to load in documents from a connection. Each connection has a connector id (corresponding to the SaaS app that was connected) and a connection id (which you passed in to the frontend library). # Uncomment this to install psychicapi if you don't already have it installed !poetry run pip -q install psychicapi [notice] A new release of pip is available: 23.0.1 -> 23.1.2 [notice] To update, run: pip install --upgrade pip from langchain.document_loaders import PsychicLoader from psychicapi import ConnectorId # Create a document loader for google drive. We can also load from other connectors by setting the connector_id to the appropriate value e.g. ConnectorId.notion.value # This loader uses our test credentials google_drive_loader = PsychicLoader( api_key="7ddb61c1-8b6a-4d31-a58e-30d1c9ea480e", connector_id=ConnectorId.gdrive.value, connection_id="google-test" ) documents = google_drive_loader.load() Converting the docs to embeddings# We can now convert these documents into embeddings and store them in a vector database like Chroma. from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter from langchain.llms import OpenAI from langchain.chains import RetrievalQAWithSourcesChain text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_documents(texts, embeddings) chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever()) chain({"question": "what is psychic?"}, return_only_outputs=True)
Reddit Reddit# Reddit is an American social news aggregation, content rating, and discussion website. This loader fetches the text from the Posts of Subreddits or Reddit users, using the praw Python package. Make a Reddit Application and initialize the loader with your Reddit API credentials. from langchain.document_loaders import RedditPostsLoader # !pip install praw # load using 'subreddit' mode loader = RedditPostsLoader( client_id="YOUR CLIENT ID", client_secret="YOUR CLIENT SECRET", user_agent="extractor by u/Master_Ocelot8179", categories=['new', 'hot'], # List of categories to load posts from mode = 'subreddit', search_queries=['investing', 'wallstreetbets'], # List of subreddits to load posts from number_posts=20 # Default value is 10 ) # # or load using 'username' mode # loader = RedditPostsLoader( # client_id="YOUR CLIENT ID", # client_secret="YOUR CLIENT SECRET", # user_agent="extractor by u/Master_Ocelot8179", # categories=['new', 'hot'], # mode = 'username', # search_queries=['ga3far', 'Master_Ocelot8179'], # List of usernames to load posts from # number_posts=20 # ) # Note: Categories can only be one of the following values - "controversial" "hot" "new" "rising" "top" documents = loader.load() documents[:5]
[Document(page_content='Hello, I am not looking for investment advice. I will apply my own due diligence. However, I am interested if anyone knows as a UK resident how fees and exchange rate differences would impact performance?\n\nI am planning to create a pie of index funds (perhaps UK, US, europe) or find a fund with a good track record of long term growth at low rates. \n\nDoes anyone have any ideas?', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Long term retirement funds fees/exchange rate query', 'post_score': 1, 'post_id': '130pa6m', 'post_url': 'https://www.reddit.com/r/investing/comments/130pa6m/long_term_retirement_funds_feesexchange_rate_query/', 'post_author': Redditor(name='Badmanshiz')}), Document(page_content='I much prefer the Roth IRA and would rather rollover my 401k to that every year instead of keeping it in the limited 401k options. But if I rollover, will I be able to continue contributing to my 401k? Or will that close my account? I realize that there are tax implications of doing this but I still think it is the better option.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Is it possible to rollover my 401k every year?', 'post_score': 3, 'post_id': '130ja0h', 'post_url': 'https://www.reddit.com/r/investing/comments/130ja0h/is_it_possible_to_rollover_my_401k_every_year/', 'post_author': Redditor(name='AnCap_Catholic')}),
Document(page_content='Have a general question? Want to offer some commentary on markets? Maybe you would just like to throw out a neat fact that doesn\'t warrant a self post? Feel free to post here! \n\nIf your question is "I have $10,000, what do I do?" or other "advice for my personal situation" questions, you should include relevant information, such as the following:\n\n* How old are you? What country do you live in? \n* Are you employed/making income? How much? \n* What are your objectives with this money? (Buy a house? Retirement savings?) \n* What is your time horizon? Do you need this money next month? Next 20yrs? \n* What is your risk tolerance? (Do you mind risking it at blackjack or do you need to know its 100% safe?) \n* What are you current holdings? (Do you already have exposure to specific funds and sectors? Any other assets?) \n* Any big debts (include interest rate) or expenses? \n* And any other relevant financial information will be useful to give you a proper answer. \n\nPlease consider consulting our FAQ first - https://www.reddit.com/r/investing/wiki/faq\nAnd our [side bar](https://www.reddit.com/r/investing/about/sidebar) also has useful resources. \n\nIf you are new to investing - please refer to Wiki - [Getting Started](https://www.reddit.com/r/investing/wiki/index/gettingstarted/)\n\nThe reading list in the wiki has a list of books ranging from light reading to advanced topics depending on your knowledge level. Link here - [Reading List](https://www.reddit.com/r/investing/wiki/readinglist)\n\nCheck the resources in the sidebar.\n\nBe aware that these answers
are just opinions of Redditors and should be used as a starting point for your research. You should strongly consider seeing a registered investment adviser if you need professional support before making any financial decisions!', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Daily General Discussion and Advice Thread - April 27, 2023', 'post_score': 5, 'post_id': '130eszz', 'post_url': 'https://www.reddit.com/r/investing/comments/130eszz/daily_general_discussion_and_advice_thread_april/', 'post_author': Redditor(name='AutoModerator')}),
Document(page_content="Based on recent news about salt battery advancements and the overall issues of lithium, I was wondering what would be feasible ways to invest into non-lithium based battery technologies? CATL is of course a choice, but the selection of brokers I currently have in my disposal don't provide HK stocks at all.", metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Investing in non-lithium battery technologies?', 'post_score': 2, 'post_id': '130d6qp', 'post_url': 'https://www.reddit.com/r/investing/comments/130d6qp/investing_in_nonlithium_battery_technologies/', 'post_author': Redditor(name='-manabreak')}),
Document(page_content='Hello everyone,\n\nI would really like to invest in an ETF that follows spy or another big index, as I think this form of investment suits me best. \n\nThe problem is, that I live in Denmark where ETFs and funds are taxed annually on unrealised gains at quite a steep rate. This means that an ETF growing say 10% per year will only grow about 6%, which really ruins the long term effects of compounding interest.\n\nHowever stocks are only taxed on realised gains which is why they look more interesting to hold long term.\n\nI do not like the lack of diversification this brings, as I am looking to spend tonnes of time picking the right long term stocks.\n\nIt would be ideal to find a few stocks that over the long term somewhat follows the indexes. Does anyone have suggestions?\n\nI have looked at Nasdaq Inc. which quite closely follows Nasdaq 100. \n\nI really appreciate any help.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Stocks that track an index', 'post_score': 7, 'post_id': '130auvj', 'post_url': 'https://www.reddit.com/r/investing/comments/130auvj/stocks_that_track_an_index/', 'post_author': Redditor(name='LeAlbertP')})]
Iugu Iugu# Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications. This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization. import os from langchain.document_loaders import IuguLoader from langchain.indexes import VectorstoreIndexCreator The Iugu API requires an access token, which can be found inside the Iugu dashboard. This document loader also requires a resource option, which defines what data you want to load. The following resources are available: Documentation iugu_loader = IuguLoader("charges") # Create a vectorstore retriever from the loader # see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details index = VectorstoreIndexCreator().from_loaders([iugu_loader]) iugu_doc_retriever = index.vectorstore.as_retriever()
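With the retriever in hand, a query might look like this (the query string is purely illustrative):

docs = iugu_doc_retriever.get_relevant_documents("recent charges")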
CoNLL-U CoNLL-U# CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below. Blank lines marking sentence boundaries. Comment lines starting with hash (#). A small illustrative snippet of the format follows at the end of this example. This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples. from langchain.document_loaders import CoNLLULoader loader = CoNLLULoader("example_data/conllu.conllu") document = loader.load() document [Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})]
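For orientation, a hypothetical CoNLL-U fragment (not the bundled example file; the 10 tab-separated fields are ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC):

# text = They buy books.
1	They	they	PRON	PRP	Case=Nom|Number=Plur	2	nsubj	_	_
2	buy	buy	VERB	VBP	Tense=Pres	0	root	_	_
3	books	book	NOUN	NNS	Number=Plur	2	obj	_	SpaceAfter=No
4	.	.	PUNCT	.	_	2	punct	_	_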
File Directory Contents Show a progress bar Use multithreading Change loader class Auto detect file encodings with TextLoader A. Default Behavior B. Silent fail C. Auto detect encodings File Directory# This covers how to use the DirectoryLoader to load all documents in a directory. Under the hood, by default this uses the UnstructuredLoader. from langchain.document_loaders import DirectoryLoader We can use the glob parameter to control which files to load. Note that here it doesn't load the .rst file or the .ipynb files. loader = DirectoryLoader('../', glob="**/*.md") docs = loader.load() len(docs) 1 Show a progress bar# By default a progress bar will not be shown. To show a progress bar, install the tqdm library (e.g. pip install tqdm), and set the show_progress parameter to True. %pip install tqdm loader = DirectoryLoader('../', glob="**/*.md", show_progress=True) docs = loader.load() Requirement already satisfied: tqdm in /Users/jon/.pyenv/versions/3.9.16/envs/microbiome-app/lib/python3.9/site-packages (4.65.0) 0it [00:00, ?it/s] Use multithreading# By default the loading happens in one thread. In order to utilize several threads, set the use_multithreading flag to true. loader = DirectoryLoader('../', glob="**/*.md", use_multithreading=True) docs = loader.load() Change loader class# By default this uses the UnstructuredLoader class. However, you can change up the type of loader pretty easily. from langchain.document_loaders import TextLoader loader = DirectoryLoader('../', glob="**/*.md", loader_cls=TextLoader) docs = loader.load()
len(docs) 1 If you need to load Python source code files, use the PythonLoader. from langchain.document_loaders import PythonLoader loader = DirectoryLoader('../../../../../', glob="**/*.py", loader_cls=PythonLoader) docs = loader.load() len(docs) 691 Auto detect file encodings with TextLoader# In this example we will see some strategies that can be useful when loading a big list of arbitrary files from a directory using the TextLoader class. First, to illustrate the problem, let's try to load multiple text files with arbitrary encodings. path = '../../../../../tests/integration_tests/examples' loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader) A. Default Behavior# loader.load() ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /data/source/langchain/langchain/document_loaders/text.py:29 in load │ │ │ │ 26 │ │ text = "" │ │ 27 │ │ with open(self.file_path, encoding=self.encoding) as f: │ │ 28 │ │ │ try: │ │ ❱ 29 │ │ │ │ text = f.read() │ │ 30 │ │ │ except UnicodeDecodeError as e: │ │ 31 │ │ │ │ if self.autodetect_encoding: │ │ 32 │ │ │ │ │ detected_encodings = self.detect_file_encodings() │ │ │ │ /home/spike/.pyenv/versions/3.9.11/lib/python3.9/codecs.py:322 in decode │
│ │ │ 319 │ def decode(self, input, final=False): │ │ 320 │ │ # decode input (taking the buffer into account) │ │ 321 │ │ data = self.buffer + input │ │ ❱ 322 │ │ (result, consumed) = self._buffer_decode(data, self.errors, final) │ │ 323 │ │ # keep undecoded input until the next call │ │ 324 │ │ self.buffer = data[consumed:] │ │ 325 │ │ return result │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ UnicodeDecodeError: 'utf-8' codec can't decode byte 0xca in position 0: invalid continuation byte The above exception was the direct cause of the following exception: ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ in <module>:1 │ │ │ │ ❱ 1 loader.load() │ │ 2 │ │ │ │ /data/source/langchain/langchain/document_loaders/directory.py:84 in load │ │ │ │ 81 │ │ │ │ │ │ if self.silent_errors: │ │ 82 │ │ │ │ │ │ │ logger.warning(e) │ │ 83 │ │ │ │ │ │ else: │ │ ❱ 84 │ │ │ │ │ │ │ raise e │
│ 85 │ │ │ │ │ finally: │ │ 86 │ │ │ │ │ │ if pbar: │ │ 87 │ │ │ │ │ │ │ pbar.update(1) │ │ │ │ /data/source/langchain/langchain/document_loaders/directory.py:78 in load │ │ │ │ 75 │ │ │ if i.is_file(): │ │ 76 │ │ │ │ if _is_visible(i.relative_to(p)) or self.load_hidden: │ │ 77 │ │ │ │ │ try: │ │ ❱ 78 │ │ │ │ │ │ sub_docs = self.loader_cls(str(i), **self.loader_kwargs).load() │ │ 79 │ │ │ │ │ │ docs.extend(sub_docs) │ │ 80 │ │ │ │ │ except Exception as e: │ │ 81 │ │ │ │ │ │ if self.silent_errors: │ │ │ │ /data/source/langchain/langchain/document_loaders/text.py:44 in load │ │ │ │ 41 │ │ │ │ │ │ except UnicodeDecodeError: │ │ 42 │ │ │ │ │ │ │ continue │ │ 43 │ │ │ │ else: │ │ ❱ 44 │ │ │ │ │ raise RuntimeError(f"Error loading {self.file_path}") from e │
│ 45 │ │ │ except Exception as e: │ │ 46 │ │ │ │ raise RuntimeError(f"Error loading {self.file_path}") from e │ │ 47 │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Error loading ../../../../../tests/integration_tests/examples/example-non-utf8.txt The file example-non-utf8.txt uses a different encoding, so the load() function fails with a helpful message indicating which file failed decoding. With the default behavior of TextLoader, any failure to load a document fails the whole loading process, and no documents are loaded. B. Silent fail# We can pass the parameter silent_errors to the DirectoryLoader to skip the files which could not be loaded and continue the load process. loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader, silent_errors=True) docs = loader.load() Error loading ../../../../../tests/integration_tests/examples/example-non-utf8.txt doc_sources = [doc.metadata['source'] for doc in docs] doc_sources ['../../../../../tests/integration_tests/examples/whatsapp_chat.txt', '../../../../../tests/integration_tests/examples/example-utf8.txt'] C. Auto detect encodings# We can also ask TextLoader to auto-detect the file encoding before failing, by passing autodetect_encoding to the loader class. text_loader_kwargs={'autodetect_encoding': True} loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs) docs = loader.load() doc_sources = [doc.metadata['source'] for doc in docs] doc_sources ['../../../../../tests/integration_tests/examples/example-non-utf8.txt', '../../../../../tests/integration_tests/examples/whatsapp_chat.txt',
'../../../../../tests/integration_tests/examples/example-utf8.txt']
Facebook Chat Facebook Chat# Messenger is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging service in 2010. This notebook covers how to load data from Facebook Chat into a format that can be ingested into LangChain. #pip install pandas from langchain.document_loaders import FacebookChatLoader loader = FacebookChatLoader("example_data/facebook_chat.json") loader.load()
[Document(page_content='User 2 on 2023-02-05 03:46:11: Bye!\n\nUser 1 on 2023-02-05 03:43:55: Oh no worries! Bye\n\nUser 2 on 2023-02-05 03:24:37: No Im sorry it was my mistake, the blue one is not for sale\n\nUser 1 on 2023-02-05 03:05:40: I thought you were selling the blue one!\n\nUser 1 on 2023-02-05 03:05:09: Im not interested in this bag. Im interested in the blue one!\n\nUser 2 on 2023-02-05 03:04:28: Here is $129\n\nUser 2 on 2023-02-05 03:04:05: Online is at least $100\n\nUser 1 on 2023-02-05 02:59:59: How much do you want?\n\nUser 2 on 2023-02-04 22:17:56: Goodmorning! $50 is too low.\n\nUser 1 on 2023-02-04 14:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})]
2Markdown 2Markdown# The 2markdown service transforms website content into structured markdown files. # You will need to get your own API key. See https://2markdown.com/login api_key = "" from langchain.document_loaders import ToMarkdownLoader loader = ToMarkdownLoader.from_api_key(url="https://python.langchain.com/en/latest/", api_key=api_key) docs = loader.load() print(docs[0].page_content) ## Contents - [Getting Started](#getting-started) - [Modules](#modules) - [Use Cases](#use-cases) - [Reference Docs](#reference-docs) - [LangChain Ecosystem](#langchain-ecosystem) - [Additional Resources](#additional-resources) ## Welcome to LangChain [\#](\#welcome-to-langchain "Permalink to this headline") **LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be: 1. _Data-aware_: connect a language model to other sources of data 2. _Agentic_: allow a language model to interact with its environment The LangChain framework is designed around these principles. This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see [here](https://docs.langchain.com/docs/). For the JavaScript documentation, see [here](https://js.langchain.com/docs/). ## Getting Started [\#](\#getting-started "Permalink to this headline") How to get started using LangChain to create a Language Model application. - [Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html) Concepts and terminology.
- [Concepts and terminology](https://python.langchain.com/en/latest/getting_started/concepts.html) Tutorials created by community experts and presented on YouTube. - [Tutorials](https://python.langchain.com/en/latest/getting_started/tutorials.html) ## Modules [\#](\#modules "Permalink to this headline") These modules are the core abstractions which we view as the building blocks of any LLM-powered application. For each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use. The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides. The modules are (from least to most complex): - [Models](https://python.langchain.com/en/latest/modules/models.html): Supported model types and integrations. - [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization. - [Memory](https://python.langchain.com/en/latest/modules/memory.html): Memory refers to state that is persisted between calls of a chain/agent. - [Indexes](https://python.langchain.com/en/latest/modules/indexes.html): Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data. - [Chains](https://python.langchain.com/en/latest/modules/chains.html): Chains are structured sequences of calls (to an LLM or to a different utility). - [Agents](https://python.langchain.com/en/latest/modules/agents.html): An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete.
- [Callbacks](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html): Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application. ## Use Cases [\#](\#use-cases "Permalink to this headline") Best practices and built-in implementations for common LangChain use cases: - [Autonomous Agents](https://python.langchain.com/en/latest/use_cases/autonomous_agents.html): Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI. - [Agent Simulations](https://python.langchain.com/en/latest/use_cases/agent_simulations.html): Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities. - [Personal Assistants](https://python.langchain.com/en/latest/use_cases/personal_assistants.html): One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data. - [Question Answering](https://python.langchain.com/en/latest/use_cases/question_answering.html): Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer. - [Chatbots](https://python.langchain.com/en/latest/use_cases/chatbots.html): Language models love to chat, making this a very natural use of them. - [Querying Tabular Data](https://python.langchain.com/en/latest/use_cases/tabular.html): Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc).
- [Code Understanding](https://python.langchain.com/en/latest/use_cases/code.html): Recommended reading if you want to use language models to analyze code. - [Interacting with APIs](https://python.langchain.com/en/latest/use_cases/apis.html): Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions. - [Extraction](https://python.langchain.com/en/latest/use_cases/extraction.html): Extract structured information from text. - [Summarization](https://python.langchain.com/en/latest/use_cases/summarization.html): Compressing longer documents. A type of Data-Augmented Generation. - [Evaluation](https://python.langchain.com/en/latest/use_cases/evaluation.html): Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation. ## Reference Docs [\#](\#reference-docs "Permalink to this headline") Full documentation on all methods, classes, installation methods, and integration setups for LangChain. - [Reference Documentation](https://python.langchain.com/en/latest/reference.html) ## LangChain Ecosystem [\#](\#langchain-ecosystem "Permalink to this headline") Guides for how other companies/products can be used with LangChain. - [LangChain Ecosystem](https://python.langchain.com/en/latest/ecosystem.html) ## Additional Resources [\#](\#additional-resources "Permalink to this headline") Additional resources we think may be useful as you develop your application! - [LangChainHub](https://github.com/hwchase17/langchain-hub): The LangChainHub is a place to share and explore other prompts, chains, and agents.
- [Gallery](https://python.langchain.com/en/latest/additional_resources/gallery.html): A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications. - [Deployments](https://python.langchain.com/en/latest/additional_resources/deployments.html): A collection of instructions, code snippets, and template repositories for deploying LangChain apps. - [Tracing](https://python.langchain.com/en/latest/additional_resources/tracing.html): A guide on using tracing in LangChain to visualize the execution of chains and agents. - [Model Laboratory](https://python.langchain.com/en/latest/additional_resources/model_laboratory.html): Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so. - [Discord](https://discord.gg/6adMQxSpJS): Join us on our Discord to discuss all things LangChain! - [YouTube](https://python.langchain.com/en/latest/additional_resources/youtube.html): A collection of the LangChain tutorials and videos. - [Production Support](https://forms.gle/57d8AmXBYp8PP8tZA): As you move your LangChains into production, we'd love to offer more comprehensive support. Please fill out this form and we'll set up a dedicated support Slack channel.
Markdown Contents Retain Elements Markdown# Markdown is a lightweight markup language for creating formatted text using a plain-text editor. This covers how to load markdown documents into a document format that we can use downstream. # !pip install unstructured > /dev/null from langchain.document_loaders import UnstructuredMarkdownLoader markdown_path = "../../../../../README.md" loader = UnstructuredMarkdownLoader(markdown_path) data = loader.load() data
[Document(page_content="ð\x9f¦\x9cï¸\x8fð\x9f”\x97 LangChain\n\nâ\x9a¡ Building applications with LLMs through composability â\x9a¡\n\nLooking for the JS/TS version? Check out LangChain.js.\n\nProduction Support: As you move your LangChains into production, we'd love to offer more comprehensive support.\nPlease fill out this form and we'll set up a dedicated support Slack channel.\n\nQuick Install\n\npip install langchain\nor\nconda install langchain -c conda-forge\n\nð\x9f¤” What is this?\n\nLarge language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.\n\nThis library aims to assist in the development of those types of applications. Common examples of these applications include:\n\nâ\x9d“ Question Answering over specific documents\n\nDocumentation\n\nEnd-to-end Example: Question Answering over Notion Database\n\nð\x9f’¬ Chatbots\n\nDocumentation\n\nEnd-to-end Example: Chat-LangChain\n\nð\x9f¤\x96 Agents\n\nDocumentation\n\nEnd-to-end Example: GPT+WolframAlpha\n\nð\x9f“\x96 Documentation\n\nPlease see here for full documentation on:\n\nGetting started (installation, setting up the environment, simple examples)\n\nHow-To examples (demos, integrations, helper functions)\n\nReference (full API docs)\n\nResources (high-level explanation of core concepts)\n\nð\x9f\x9a\x80 What can this help
with?\n\nThere are six main areas that LangChain is designed to help with.\nThese are, in increasing order of complexity:\n\nð\x9f“\x83 LLMs and Prompts:\n\nThis includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.\n\nð\x9f”\x97 Chains:\n\nChains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\n\nð\x9f“\x9a Data Augmented Generation:\n\nData Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.\n\nð\x9f¤\x96 Agents:\n\nAgents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.\n\nð\x9f§\xa0 Memory:\n\nMemory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\n\nð\x9f§\x90 Evaluation:\n\n[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in
this.\n\nFor more information on these concepts, please see our full documentation.\n\nð\x9f’\x81 Contributing\n\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\n\nFor detailed information on how to contribute, see here.", metadata={'source': '../../../../../README.md'})]
Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredMarkdownLoader(markdown_path, mode="elements") data = loader.load() data[0] Document(page_content='ð\x9f¦\x9cï¸\x8fð\x9f”\x97 LangChain', metadata={'source': '../../../../../README.md', 'page_number': 1, 'category': 'Title'})
Image captions Contents Prepare a list of image urls from Wikimedia Create the loader Create the index Query Image captions# By default, the loader utilizes the pre-trained Salesforce BLIP image captioning model. This notebook shows how to use the ImageCaptionLoader to generate a query-able index of image captions. #!pip install transformers from langchain.document_loaders import ImageCaptionLoader Prepare a list of image urls from Wikimedia# list_image_urls = [ 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg',
'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg', 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg', ] Create the loader# loader = ImageCaptionLoader(path_images=list_image_urls) list_docs = loader.load() list_docs /Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. warnings.warn(
[Document(page_content='an image of a frog on a flower [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg'}), Document(page_content='an image of a shark swimming in the ocean [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg'}), Document(page_content='an image of a painting of a battle scene [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg'}), Document(page_content='an image of a passion fruit and a half cut passion [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg'}),
Document(page_content='an image of the spiral galaxy [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg'}), Document(page_content='an image of a man on skis in the snow [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg'}), Document(page_content='an image of a flower in the dark [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg'})] from PIL import Image import requests Image.open(requests.get(list_image_urls[0], stream=True).raw).convert('RGB') Create the index# from langchain.indexes import VectorstoreIndexCreator index = VectorstoreIndexCreator().from_loaders([loader])
/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm /Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. warnings.warn( Using embedded DuckDB without persistence: data will be transient Query# query = "What's the painting about?" index.query(query) ' The painting is about a battle scene.' query = "What kind of images are there?" index.query(query) ' There are images of a spiral galaxy, a painting of a battle scene, a flower in the dark, and a frog on a flower.'
HTML Contents Loading HTML with BeautifulSoup4 HTML# The HyperText Markup Language or HTML is the standard markup language for documents designed to be displayed in a web browser. This covers how to load HTML documents into a document format that we can use downstream. from langchain.document_loaders import UnstructuredHTMLLoader loader = UnstructuredHTMLLoader("example_data/fake-content.html") data = loader.load() data [Document(page_content='My First Heading\n\nMy first paragraph.', lookup_str='', metadata={'source': 'example_data/fake-content.html'}, lookup_index=0)] Loading HTML with BeautifulSoup4# We can also use BeautifulSoup4 to load HTML documents using the BSHTMLLoader. This will extract the text from the HTML into page_content, and the page title as title into metadata. from langchain.document_loaders import BSHTMLLoader loader = BSHTMLLoader("example_data/fake-content.html") data = loader.load() data [Document(page_content='\n\nTest Title\n\n\nMy First Heading\nMy first paragraph.\n\n\n', metadata={'source': 'example_data/fake-content.html', 'title': 'Test Title'})]
Copy Paste Contents Metadata Copy Paste# This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly. from langchain.docstore.document import Document text = "..... put the text you copy pasted here......" doc = Document(page_content=text) Metadata# If you want to add metadata about where you got this piece of text, you easily can with the metadata key. metadata = {"source": "internet", "date": "Friday"} doc = Document(page_content=text, metadata=metadata)
Pandas DataFrame Pandas DataFrame# This notebook goes over how to load data from a pandas DataFrame. #!pip install pandas import pandas as pd df = pd.read_csv('example_data/mlb_teams_2012.csv') df.head()
   Team        "Payroll (millions)"   "Wins"
0  Nationals   81.34                  98
1  Reds        82.20                  97
2  Yankees     197.96                 95
3  Giants      117.62                 94
4  Braves      83.31                  94
from langchain.document_loaders import DataFrameLoader loader = DataFrameLoader(df, page_content_column="Team") loader.load() [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}),
Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}),
Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]
Microsoft OneDrive Contents Prerequisites 🧑 Instructions for ingesting your documents from OneDrive 🔑 Authentication 🗂️ Documents loader 📑 Loading documents from a OneDrive Directory 📑 Loading documents from a list of Documents IDs Microsoft OneDrive# Microsoft OneDrive (formerly SkyDrive) is a file hosting service operated by Microsoft. This notebook covers how to load documents from OneDrive. Currently, only docx, doc, and pdf files are supported. Prerequisites# Register an application following the Microsoft identity platform instructions. When registration finishes, the Azure portal displays the app registration's Overview pane. You see the Application (client) ID. Also called the client ID, this value uniquely identifies your application in the Microsoft identity platform. During the steps you will be following at item 1, you can set the redirect URI as http://localhost:8000/callback During the steps you will be following at item 1, generate a new password (client_secret) under the Application Secrets section. Follow the instructions at this document to add the following SCOPES (offline_access and Files.Read.All) to your application. Visit the Graph Explorer Playground to obtain your OneDrive ID. The first step is to ensure you are logged in with the account associated with your OneDrive account. Then you need to make a request to https://graph.microsoft.com/v1.0/me/drive and the response will return a payload with a field id that holds the ID of your OneDrive account. You need to install the o365 package using the command pip install o365. At the end of the steps you must have the following values: CLIENT_ID CLIENT_SECRET DRIVE_ID
🧑 Instructions for ingesting your documents from OneDrive# 🔑 Authentication# By default, the OneDriveLoader expects the values of CLIENT_ID and CLIENT_SECRET to be stored as environment variables named O365_CLIENT_ID and O365_CLIENT_SECRET, respectively. You can pass those environment variables through a .env file at the root of your application (see the sketch below) or set them in your script:
os.environ['O365_CLIENT_ID'] = "YOUR CLIENT ID"
os.environ['O365_CLIENT_SECRET'] = "YOUR CLIENT SECRET"
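If you prefer the .env route, the following is a minimal sketch; it assumes the python-dotenv package (pip install python-dotenv), which is an extra dependency, not something the loader itself requires.
# load O365_CLIENT_ID and O365_CLIENT_SECRET from a .env file into os.environ
from dotenv import load_dotenv
load_dotenv()  # searches for a .env file starting from the current directory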
This loader uses an authentication flow called on behalf of a user. It is a two-step authentication with user consent. When you instantiate the loader, it prints a URL that the user must visit to give consent to the app for the required permissions. The user must then copy the URL of the resulting page and paste it back into the console. The method then returns True if the login attempt was successful.
from langchain.document_loaders.onedrive import OneDriveLoader
loader = OneDriveLoader(drive_id="YOUR DRIVE ID")
Once the authentication has been done, the loader will store a token (o365_token.txt) in the ~/.credentials/ folder. This token can be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, set the auth_with_token parameter to True when instantiating the loader.
from langchain.document_loaders.onedrive import OneDriveLoader
loader = OneDriveLoader(drive_id="YOUR DRIVE ID", auth_with_token=True)
🗂️ Documents loader# 📑 Loading documents from a OneDrive Directory# OneDriveLoader can load documents from a specific folder within your OneDrive. For instance, to load all documents that are stored in the Documents/clients folder within your OneDrive:
from langchain.document_loaders.onedrive import OneDriveLoader
loader = OneDriveLoader(drive_id="YOUR DRIVE ID", folder_path="Documents/clients", auth_with_token=True)
documents = loader.load()
📑 Loading documents from a list of Documents IDs# Another possibility is to provide a list of object_id values, one for each document you want to load. For that, you will need to query the Microsoft Graph API to find the IDs of all the documents you are interested in. This link provides a list of endpoints that are helpful for retrieving document IDs. For instance, to retrieve information about all objects that are stored at the root of the Documents folder, you need to make a request to: https://graph.microsoft.com/v1.0/drives/{YOUR DRIVE ID}/root/children (a sketch of this request follows the example below). Once you have the list of IDs you are interested in, you can instantiate the loader with the following parameters.
from langchain.document_loaders.onedrive import OneDriveLoader
loader = OneDriveLoader(drive_id="YOUR DRIVE ID", object_ids=["ID_1", "ID_2"], auth_with_token=True)
documents = loader.load()
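For reference, the /root/children request mentioned above can be sketched as follows. Again, this is illustrative only: requests, ACCESS_TOKEN, and DRIVE_ID are placeholders and assumptions, not part of the OneDriveLoader API.
import requests
ACCESS_TOKEN = "YOUR ACCESS TOKEN"  # hypothetical Microsoft Graph access token
DRIVE_ID = "YOUR DRIVE ID"
url = f"https://graph.microsoft.com/v1.0/drives/{DRIVE_ID}/root/children"
response = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
# each item in the "value" array carries an "id" usable as an object_id
object_ids = [item["id"] for item in response.json()["value"]]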
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/document_loaders/examples/microsoft_onedrive.html
.ipynb .pdf MediaWikiDump MediaWikiDump# MediaWiki XML Dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. An XML dump does not create a full backup of the wiki database; the dump does not contain user accounts, images, edit logs, etc. This covers how to load a MediaWiki XML dump file into a document format that we can use downstream. It uses mwxml from mediawiki-utilities to read the dump and mwparserfromhell from earwig to parse MediaWiki wikicode. Dump files can be obtained with dumpBackup.php or on the Special:Statistics page of the Wiki.
#mediawiki-utilities supports XML schema 0.11 in unmerged branches
!pip install -qU git+https://github.com/mediawiki-utilities/python-mwtypes@updates_schema_0.11
#mediawiki-utilities mwxml has a bug, fix PR pending
!pip install -qU git+https://github.com/gdedrouas/python-mwxml@xml_format_0.11
!pip install -qU mwparserfromhell
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader("example_data/testmw_pages_current.xml", encoding="utf8")
documents = loader.load()
print(f'You have {len(documents)} document(s) in your data ')
You have 177 document(s) in your data
documents[:5]
[Document(page_content='\t\n\t\n\tArtist\n\tReleased\n\tRecorded\n\tLength\n\tLabel\n\tProducer', metadata={'source': 'Album'}),
Document(page_content='{| class="article-table plainlinks" style="width:100%;"\n|- style="font-size:18px;"\n! style="padding:0px;" | Template documentation\n|-\n| Note: portions of the template sample may not be visible without values provided.\n|-\n| View or edit this documentation. (About template documentation)\n|-\n| Editors can experiment in this template\'s [ sandbox] and [ test case] pages.\n|}Category:Documentation templates', metadata={'source': 'Documentation'}),
Document(page_content='Description\nThis template is used to insert descriptions on template pages.\n\nSyntax\nAdd <noinclude></noinclude> at the end of the template page.\n\nAdd <noinclude></noinclude> to transclude an alternative page from the /doc subpage.\n\nUsage\n\nOn the Template page\nThis is the normal format when used:\n\nTEMPLATE CODE\n<includeonly>Any categories to be inserted into articles by the template</includeonly>\n<noinclude>{{Documentation}}</noinclude>\n\nIf your template is not a completed div or table, you may need to close the tags just before {{Documentation}} is inserted (within the noinclude tags).\n\nA line break right before {{Documentation}} can also be useful as it helps prevent the documentation template "running into" previous code.\n\nOn the documentation page\nThe documentation page is usually located on the /doc subpage for a template, but a different page can be specified with the first parameter of the template (see Syntax).\n\nNormally, you will want to write something like the following on the documentation page:\n\n==Description==\nThis template is used to do something.\n\n==Syntax==\nType <code>{{t|templatename}}</code> somewhere.\n\n==Samples==\n<code><nowiki>{{templatename|input}}</nowiki></code> \n\nresults in...\n\n{{templatename|input}}\n\n<includeonly>Any categories for the template itself</includeonly>\n<noinclude>[[Category:Template documentation]]</noinclude>\n\nUse any or all of the above description/syntax/sample output sections. You may also want to add "see also" or other sections.\n\nNote that the above example also uses the Template:T template.\n\nCategory:Documentation templatesCategory:Template documentation', metadata={'source':
'Documentation/doc'}),
Document(page_content='Description\nA template link with a variable number of parameters (0-20).\n\nSyntax\n \n\nSource\nImproved version not needing t/piece subtemplate developed on Templates wiki see the list of authors. Copied here via CC-By-SA 3.0 license.\n\nExample\n\nCategory:General wiki templates\nCategory:Template documentation', metadata={'source': 'T/doc'}), Document(page_content='\t\n\t\t \n\t\n\t\t Aliases\n\t Relatives\n\t Affiliation\n Occupation\n \n Biographical information\n Marital status\n \tDate of birth\n Place of birth\n Date of death\n Place of death\n \n Physical description\n Species\n Gender\n Height\n Weight\n Eye color\n\t\n Appearances\n Portrayed by\n Appears in\n Debut\n ', metadata={'source': 'Character'})]
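Each wiki page becomes a single Document whose title is recorded under metadata['source'], so a quick sanity check after loading is to print a few titles and content lengths. A minimal sketch, not part of the original notebook:
# inspect the first few loaded pages: title and content length
for doc in documents[:5]:
    print(doc.metadata["source"], len(doc.page_content))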
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/document_loaders/examples/mediawikidump.html
.ipynb .pdf iFixit Contents Searching iFixit using /suggest iFixit# iFixit is the largest open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0. This loader allows you to download the text of repair guides, Q&As, and device wikis from iFixit using their open APIs. It’s incredibly useful as context for technical documents and for answering questions about the devices in iFixit’s corpus.
from langchain.document_loaders import IFixitLoader
loader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")
data = loader.load()
data
[Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)] loader = IFixitLoader("https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself") data = loader.load() data
[Document(page_content='# My iPhone 6 is typing and opening apps by itself\nmy iphone 6 is typing and opening apps by itself. How do i fix this. I just bought it last week.\nI restored as manufactures cleaned up the screen\nthe problem continues\n\n## 27 Answers\n\nFilter by: \n\nMost Helpful\nNewest\nOldest\n\n### Accepted Answer\nHi,\nWhere did you buy it? If you bought it from Apple or from an official retailer like Carphone warehouse etc. Then you\'ll have a year warranty and can get it replaced free.\nIf you bought it second hand, from a third part repair shop or online, then it may still have warranty, unless it is refurbished and has been repaired elsewhere.\nIf this is the case, it may be the screen that needs replacing to solve your issue.\nEither way, wherever you got it, it\'s best to return it and get a refund or a replacement device. :-)\n\n\n\n### Most Helpful Answer\nI had the same issues, screen freezing, opening apps by itself, selecting the screens and typing on it\'s own. I first suspected aliens and then ghosts and then hackers.\niPhone 6 is weak physically and tend to bend on pressure. And my phone had no case or cover.\nI took the phone to apple stores and they said sensors need to be replaced and possibly screen replacement as well. My phone is just 17 months old.\nHere is what I did two days ago and since then it is working like a charm..\nHold the phone in portrait (as if watching a movie). Twist it very very gently. do it few times.Rest the phone for 10 mins (put it on a flat surface). You can now notice those self typing things gone and screen getting stabilized.\nThen, reset the hardware (hold the power and home button till the screen goes off and comes back with apple
logo). release the buttons when you see this.\nThen, connect to your laptop and log in to iTunes and reset your phone completely. (please take a back-up first).\nAnd your phone should be good to use again.\nWhat really happened here for me is that the sensors might have stuck to the screen and with mild twisting, they got disengaged/released.\nI posted this in Apple Community and the moderators deleted it, for the best reasons known to them.\nInstead of throwing away your phone (or selling cheaply), try this and you could be saving your phone.\nLet me know how it goes.\n\n\n\n### Other Answer\nIt was the charging cord! I bought a gas station braided cord and it was the culprit. Once I plugged my OEM cord into the phone the GHOSTS went away.\n\n\n\n### Other Answer\nI\'ve same issue that I just get resolved. I first tried to restore it from iCloud back, however it was not a software issue or any virus issue, so after restore same problem continues. Then I get my phone to local area iphone repairing lab, and they detected that it is an LCD issue. LCD get out of order without any reason (It was neither hit or nor slipped, but LCD get out of order all and sudden, while using it) it started opening things at random. I get LCD replaced with new one, that cost me $80.00 in total ($70.00 LCD charges + $10.00 as labor charges to fix it). iPhone is back to perfect mode now. It was iphone 6s. Thanks.\n\n\n\n### Other Answer\nI was having the same issue with my 6 plus, I took it to a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up
and cleaned the connectors and connected the screen again and it solved the issue… it’s hardware, not software.\n\n\n\n### Other Answer\nHey.\nJust had this problem now. As it turns out, you just need to plug in your phone. I use a case and when I took it off I noticed that there was a lot of dust and dirt around the areas that the case didn\'t cover. I shined a light in my ports and noticed they were filled with dust. Tomorrow I plan on using pressurized air to clean it out and the problem should be solved. If you plug in your phone and unplug it and it stops the issue, I recommend cleaning your phone thoroughly.\n\n\n\n### Other Answer\nI simply changed the power supply and problem was gone. The block that plugs in the wall not the sub cord. The cord was fine but not the block.\n\n\n\n### Other Answer\nSomeone ask! I purchased my iPhone 6s Plus for 1000 from at&t. Before I touched it, I purchased a otter defender case. I read where at&t said touch desease was due to dropping! Bullshit!! I am 56 I have never dropped it!! Looks brand new! Never dropped or abused any way! I have my original charger. I am going to clean it and try everyone’s advice. It really sucks! I had 40,000,000 on my heart of Vegas slots! I play every day. I would be spinning and my fingers were no where max buttons and it would light up and switch to max. It did it 3 times before I caught it light up by its self. It sucks. Hope I can fix it!!!!\n\n\n\n### Other Answer\nNo answer, but same
problem with iPhone 6 plus--random, self-generated jumping amongst apps and typing on its own--plus freezing regularly (aha--maybe that\'s what the "plus" in "6 plus" refers to?). An Apple Genius recommended upgrading to iOS 11.3.1 from 11.2.2, to see if that fixed the trouble. If it didn\'t, Apple will sell me a new phone for $168! Of couese the OS upgrade didn\'t fix the problem. Thanks for helping me figure out that it\'s most likely a hardware problem--which the "genius" probably knows too.\nI\'m getting ready to go Android.\n\n\n\n### Other Answer\nI experienced similar ghost touches. Two weeks ago, I changed my iPhone 6 Plus shell (I had forced the phone into it because it’s pretty tight), and also put a new glass screen protector (the edges of the protector don’t stick to the screen, weird, so I brushed pressure on the edges at times to see if they may smooth out one day miraculously). I’m not sure if I accidentally bend the phone when I installed the shell, or, if I got a defective glass protector that messes up the touch sensor. Well, yesterday was the worse day, keeps dropping calls and ghost pressing keys for me when I was on a call. I got fed up, so I removed the screen protector, and so far problems have not reoccurred yet. I’m crossing my fingers that problems indeed solved.\n\n\n\n### Other Answer\nthank you so much for this post! i was struggling doing the reset because i cannot type userids and passwords correctly because the iphone 6 plus i have kept on typing letters incorrectly. I have been doing it for a day until i come across this article.
Very helpful! God bless you!!\n\n\n\n### Other Answer\nI just turned it off, and turned it back on.\n\n\n\n### Other Answer\nMy problem has not gone away completely but its better now i changed my charger and turned off prediction ....,,,now it rarely happens\n\n\n\n### Other Answer\nI tried all of the above. I then turned off my home cleaned it with isopropyl alcohol 90%. Then I baked it in my oven on warm for an hour and a half over foil. Took it out and set it cool completely on the glass top stove. Then I turned on and it worked.\n\n\n\n### Other Answer\nI think at& t should man up and fix your phone for free! You pay a lot for a Apple they should back it. I did the next 30 month payments and finally have it paid off in June. My iPad sept. Looking forward to a almost 100 drop in my phone bill! Now this crap!!! Really\n\n\n\n### Other Answer\nIf your phone is JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeed a virus/malware on the phone which allowed for remote control of my iphone (even while in lock mode). My mistake for buying a third party iphone i suppose. Anyway i have since had the phone restored to factory and everything is working as expected for now. I will of course keep you posted if this changes. Thanks to all for the helpful posts, really helped me narrow a few things down.\n\n\n\n### Other Answer\nWhen my phone was doing this, it ended up being the screen protector that i got from 5 below. I took it off and it stopped. I ordered more protectors from amazon and replaced
it\n\n\n\n### Other Answer\niPhone 6 Plus first generation….I had the same issues as all above, apps opening by themselves, self typing, ultra sensitive screen, items jumping around all over….it even called someone on FaceTime twice by itself when I was not in the room…..I thought the phone was toast and i’d have to buy a new one took me a while to figure out but it was the extra cheap block plug I bought at a dollar store for convenience of an extra charging station when I move around the house from den to living room…..cord was fine but bought a new Apple brand block plug…no more problems works just fine now. This issue was a recent event so had to narrow things down to what had changed recently to my phone so I could figure it out.\nI even had the same problem on a laptop with documents opening up by themselves…..a laptop that was plugged in to the same wall plug as my phone charger with the dollar store block plug….until I changed the block plug.\n\n\n\n### Other Answer\nHad the problem: Inherited a 6s Plus from my wife. She had no problem with it.\nLooks like it was merely the cheap phone case I purchased on Amazon. It was either pinching the edges or torquing the screen/body of the phone. Problem solved.\n\n\n\n### Other Answer\nI bought my phone on march 6 and it was a brand new, but It sucks me uo because it freezing, shaking and control by itself. I went to the store where I bought this and I told them to replacr it, but they told me I have to pay it because Its about lcd issue. Please help me what other ways to fix it. Or should I try to remove the screen or should I follow your step above.\n\n\n\n### Other
Answer\nI tried everything and it seems to come back to needing the original iPhone cable…or at least another 1 that would have come with another iPhone…not the $5 Store fast charging cables. My original cable is pretty beat up - like most that I see - but I’ve been beaten up much MUCH less by sticking with its use! I didn’t find that the casing/shell around it or not made any diff.\n\n\n\n### Other Answer\ngreat now I have to wait one more hour to reset my phone and while I was tryin to connect my phone to my computer the computer also restarted smh does anyone else knows how I can get my phone to work… my problem is I have a black dot on the bottom left of my screen an it wont allow me to touch a certain part of my screen unless I rotate my phone and I know the password but the first number is a 2 and it won\'t let me touch 1,2, or 3 so now I have to find a way to get rid of my password and all of a sudden my phone wants to touch stuff on its own which got my phone disabled many times to the point where I have to wait a whole hour and I really need to finish something on my phone today PLEASE HELPPPP\n\n\n\n### Other Answer\nIn my case , iphone 6 screen was faulty. I got it replaced at local repair shop, so far phone is working fine.\n\n\n\n### Other Answer\nthis problem in iphone 6 has many different scenarios and solutions, first try to reconnect the lcd screen to the motherboard again, if didnt solve, try to replace the lcd connector on the motherboard, if not solved, then remains two issues, lcd screen it self or touch IC. in my country some repair shops just change them all for almost 40$
since they dont want to troubleshoot one by one. readers of this comment also should know that partial screen not responding in other iphone models might also have an issue in LCD connector on the motherboard, specially if you lock/unlock screen and screen works again for sometime. lcd connectors gets disconnected lightly from the motherboard due to multiple falls and hits after sometime. best of luck for all\n\n\n\n### Other Answer\nI am facing the same issue whereby these ghost touches type and open apps , I am using an original Iphone cable , how to I fix this issue.\n\n\n\n### Other Answer\nThere were two issues with the phone I had troubles with. It was my dads and turns out he carried it in his pocket. The phone itself had a little bend in it as a result. A little pressure in the opposite direction helped the issue. But it also had a tiny crack in the screen which wasnt obvious, once we added a screen protector this fixed the issues entirely.\n\n\n\n### Other Answer\nI had the same problem with my 64Gb iPhone 6+. Tried a lot of things and eventually downloaded all my images and videos to my PC and restarted the phone - problem solved. Been working now for two days.', lookup_str='', metadata={'source': 'https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself', 'title': 'My iPhone 6 is typing and opening apps by itself'}, lookup_index=0)]
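Answer pages like the one above can run to several thousand characters, so you may want to split them before indexing. Below is a minimal sketch using LangChain's RecursiveCharacterTextSplitter; the chunk sizes are arbitrary choices for illustration, not values from this notebook.
from langchain.text_splitter import RecursiveCharacterTextSplitter
# split each long Q&A document into overlapping ~1000-character chunks
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(data)
print(f"{len(data)} document(s) split into {len(chunks)} chunk(s)")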
loader = IFixitLoader("https://www.ifixit.com/Device/Standard_iPad") data = loader.load() data [Document(page_content="Standard iPad\nThe standard edition of the tablet computer made by Apple.\n== Background Information ==\n\nOriginally introduced in January 2010, the iPad is Apple's standard edition of their tablet computer. In total, there have been ten generations of the standard edition of the iPad.\n\n== Additional Information ==\n\n* [link|https://www.apple.com/ipad-select/|Official Apple Product Page]\n* [link|https://en.wikipedia.org/wiki/IPad#iPad|Official iPad Wikipedia]", lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Standard_iPad', 'title': 'Standard iPad'}, lookup_index=0)] Searching iFixit using /suggest# If you’re looking for a more general way to search iFixit based on a keyword or phrase, the /suggest endpoint will return content related to the search term, then the loader will load the content from each of the suggested items and prep and return the documents. data = IFixitLoader.load_suggestions("Banana") data
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/document_loaders/examples/ifixit.html