diff --git "a/queries.jsonl" "b/queries.jsonl"
new file mode 100644
--- /dev/null
+++ "b/queries.jsonl"
@@ -0,0 +1,672 @@
+{"id": "000000", "text": "Here is my code :\n@Component({\n template: `\n resolveData: {{resolveA}}
\n data : {{ dataA }}\n `,\n})\nclass MyComponent {\n @Input() resolveA?: string;\n @Input() dataA?: string;\n}\n\n@Component({\n selector: 'my-app',\n standalone: true,\n imports: [CommonModule, RouterModule],\n template: `\n

Hello from {{name}}!
\n\n \n `,\n})\nexport class App {\n name = 'Angular';\n}\n\nbootstrapApplication(App, {\n providers: [\n provideRouter(\n [\n {\n path: '**',\n component: MyComponent,\n data: { dataA: 'My static data' },\n resolve: { resolveA: () => 'My resolved data' },\n },\n ],\n ),\n ],\n});\n\nMyComponent should display both the static and the resolved data.\nAny idea why ?"} +{"id": "000001", "text": "According to RFC3 signal-based components with change detection strategy based fully on signals are planned as next thing to be released. So as of now, with zone-based change detection strategy, is there any sense of using signals over the traditional way of setting values to class' properties? Will signals' dependency tree eg. gain performance in zone-based components?"} +{"id": "000002", "text": "I have the following simple code on my component:\nimport {Component, effect, signal, WritableSignal} from '@angular/core';\nimport {AppService} from \"../app.service\";\nimport {toSignal} from \"@angular/core/rxjs-interop\";\n\n@Component({\n selector: 'app-parent',\n templateUrl: './parent.component.html',\n styleUrls: ['./parent.component.css']\n})\nexport class ParentComponent {\n\n translations: WritableSignal<{data: {}}> = signal({data: []});\n\n constructor( private appService: AppService) {\n this.translations = toSignal(this.appService.getTranslations());\n effect(() => {\n console.log('translation API:', this.translations());\n });\n }\n\n changeValue(): void {\n this.translations.set({data: {hello: 'hallo'}})\n\n }\n}\n\nFYI: this.appService.getTranslations() returns an observable\nI'm trying out the new features released with Angular v16, and how to convert Observables to Signals.\nWhat I wanted to do on the above code is, I change the value of the WritableSignal Object and log its value on change.\nI'm getting the following error:\nTS2739: Type 'Signal ' is missing the following properties from type 'WritableSignal{ data: {}; }>': set, update, mutate, asReadonly\n\nHelp please."} +{"id": "000003", "text": "Angular 16 is recently released and I have created a new standalone project without any module.\nthen in a standalone component I need to import BrowserAnimationsModule from angular/platform-browser/animations. but when I import it, this error occures:\n\nPoviders from the BrowserModule have already been loaded. If you\nneed access to common directives such as NgIf and NgFor, import the\nCommonModule instead.\n\nand when I remove it this one:\n\nUnexpected synthetic listener @animation.start found. Please make sure\nthat: Either BrowserAnimationsModule or NoopAnimationsModule are\nimported in your application.\n\nso why first error occures? where is BrowserModule already loaded? and if it has already been imported how do I use it?"} +{"id": "000004", "text": "Before signals, I had an observable that I would watch to trigger a FormControl's editable property, like this:\nthis.#isItEditor$\n .pipe(takeUntilDestroyed(this.#destroyRef))\n .subscribe(x => {\n const funded = this.formGroup.controls.funded\n if (x)\n funded.enable()\n else\n funded.disable()\n })\n\nNow I've changed from an observable to a signal, but it feels like, in this case, I still need to create an observable from the signal to then do the pipe/subscribe the same way I used to.\nI'm not assigning anything based on the signal changing, I'm just implementing a side effect. 
Is this correct?"} +{"id": "000005", "text": "Example from (https://indepth.dev/posts/1518/takeuntildestroy-in-angular-v16)\nThis works for one subscribe method but doesn't work for two methods\nIf you look at the following code, then when the component is destroyed, the second subscription will exist. I just can't understand why and how to make the code work for any number of subscriptions in the component? Perhaps I misunderstood something?\nimport { takeUntilDestroyed } from '@angular/core/rxjs-interop'\n\n constructor(\n ) {\n interval(1000).pipe(\n takeUntilDestroyed(),\n ).subscribe(console.log)\n\n interval(1000).pipe(\n takeUntilDestroyed(),\n ).subscribe(console.log)\n }"} +{"id": "000006", "text": "I am testing angular 16 signals and per my understanding, when I disable zone.js and call signal.update() the view should be updated with new value. It is not. Please help me to understand why.\nmain.ts\nplatformBrowserDynamic().bootstrapModule(AppModule, { ngZone: 'noop' })\n .catch(err => console.error(err));\n\napp.component.ts\n@Component({\n selector: 'app-root',\n template: '\n

{{ title() }}
\n \n ',\n})\nexport class AppComponent {\n title = signal('Hello');\n\n click(): void {\n this.title.update((value) => value + \"!!\");\n }\n}\n\nI am expecting that after button click, value of 'title' will be updated from 'Hello' to 'Hello!!'. It is not updated."} +{"id": "000007", "text": "Code - https://github.com/suyashjawale/Angular16\nI have generated my Angular 16 project using following command and selected routing to yes.\nng new myapp --standalone\n\nAnd then I generated other components using\nng g c components/home\n\nSince, i used --standalone the boilerplate files are different. (Eg. New file app.routes.ts)\n File Structure\nNow I want to implement routing So I added the following code to app.routes.ts.\n app.routes.ts\n app.component.html\nBut the routing doesn't happen. Idk why?. I have restarted the app. still it doesn't work.\nSo i implemeted loadComponent. But still it doesn't work. Code below.\n loadComponent way.\nAm i doing anything wrong. It works with angular 15. But it has app.routing.module.ts. I have restarted the app but still it doesn't work.\nFYI - component is standalone\n\nhome.component.ts"} +{"id": "000008", "text": "I am using Angular 16.0.0 and with Angular Universal server side rendering, but when I\nImport BrowserModule.withServerTransition in my app module its marked as deprecated, what is the replacement for it ?\n\nmy app.module.ts\nimport {BrowserModule} from '@angular/platform-browser';\nimport {NgModule} from '@angular/core';\n\nimport {AppRoutingModule} from './app-routing.module';\nimport {AppComponent} from './app.component';\nimport {BrowserAnimationsModule} from \"@angular/platform-browser/animations\";\nimport {MatMenuModule} from '@angular/material/menu';\nimport {MatButtonModule} from '@angular/material/button'\nimport {MatIconModule} from '@angular/material/icon';\nimport {MatCardModule} from '@angular/material/card';\nimport { HomeComponent } from './home/home.component';\nimport {MatTabsModule} from '@angular/material/tabs';\nimport { CoursesCardListComponent } from './courses-card-list/courses-card-list.component';\nimport {CourseComponent} from \"./course/course.component\";\nimport { MatDatepickerModule } from \"@angular/material/datepicker\";\nimport { MatDialogModule } from \"@angular/material/dialog\";\nimport { MatInputModule } from \"@angular/material/input\";\nimport { MatListModule } from \"@angular/material/list\";\nimport { MatPaginatorModule } from \"@angular/material/paginator\";\nimport { MatProgressSpinnerModule } from \"@angular/material/progress-spinner\";\nimport { MatSelectModule } from \"@angular/material/select\";\nimport { MatSidenavModule } from \"@angular/material/sidenav\";\nimport { MatSortModule } from \"@angular/material/sort\";\nimport { MatTableModule } from \"@angular/material/table\";\nimport { MatToolbarModule } from \"@angular/material/toolbar\";\nimport {CoursesService} from \"./services/courses.service\";\nimport {CourseResolver} from \"./services/course.resolver\";\nimport { CourseDialogComponent } from './course-dialog/course-dialog.component';\nimport {ReactiveFormsModule} from \"@angular/forms\";\nimport { HttpClientModule} from '@angular/common/http';\nimport {AboutComponent} from './about/about.component';\n\n\n@NgModule({\n declarations: [\n AppComponent,\n HomeComponent,\n CourseComponent,\n CoursesCardListComponent,\n CourseDialogComponent,\n AboutComponent,\n\n ],\n imports: [\n BrowserModule.withServerTransition({ appId: 'serverApp' }),\n //BrowserTransferStateModule,\n 
BrowserAnimationsModule,\n MatMenuModule,\n MatButtonModule,\n MatIconModule,\n MatCardModule,\n MatTabsModule,\n MatSidenavModule,\n MatListModule,\n MatToolbarModule,\n MatInputModule,\n MatTableModule,\n MatPaginatorModule,\n MatSortModule,\n MatProgressSpinnerModule,\n MatDialogModule,\n AppRoutingModule,\n MatSelectModule,\n MatDatepickerModule,\n ReactiveFormsModule,\n HttpClientModule\n ],\n providers: [\n CoursesService,\n CourseResolver\n ],\n bootstrap: [AppComponent]\n})\nexport class AppModule {\n}\n\npackage.json\n{\n \"name\": \"angular-universal-course\",\n \"version\": \"0.0.0\",\n \"scripts\": {\n \"ng\": \"ng\",\n \"start\": \"ng serve\",\n \"build\": \"ng build\",\n \"test\": \"ng test\",\n \"lint\": \"ng lint\",\n \"e2e\": \"ng e2e\",\n \"serve:prerender\": \"http-server -c-1 dist/angular-universal-course/browser\",\n \"dev:ssr\": \"ng run angular-universal-course:serve-ssr\",\n \"serve:ssr\": \"node dist/angular-universal-course/server/main.js\",\n \"build:ssr\": \"ng build --configuration production && ng run angular-universal-course:server:production\",\n \"prerender\": \"ng run angular-universal-course:prerender --routes routes.txt\"\n },\n \"private\": true,\n \"dependencies\": {\n \"@angular/animations\": \"^16.0.0\",\n \"@angular/cdk\": \"^16.0.0\",\n \"@angular/common\": \"^16.0.0\",\n \"@angular/compiler\": \"^16.0.0\",\n \"@angular/core\": \"^16.0.0\",\n \"@angular/forms\": \"^16.0.0\",\n \"@angular/material\": \"^16.0.0\",\n \"@angular/platform-browser\": \"^16.0.0\",\n \"@angular/platform-browser-dynamic\": \"^16.0.0\",\n \"@angular/platform-server\": \"^16.0.0\",\n \"@angular/router\": \"^16.0.0\",\n \"@nguniversal/express-engine\": \"^16.0.0\",\n \"@types/express\": \"^4.17.8\",\n \"express\": \"^4.15.2\",\n \"rxjs\": \"~7.8.0\",\n \"tslib\": \"^2.3.0\",\n \"zone.js\": \"~0.13.0\"\n },\n \"devDependencies\": {\n \"@angular-devkit/build-angular\": \"^16.0.0\",\n \"@angular/cli\": \"^16.0.0\",\n \"@angular/compiler-cli\": \"^16.0.0\",\n \"@nguniversal/builders\": \"^16.0.0\",\n \"@types/jasmine\": \"~3.8.0\",\n \"@types/jasminewd2\": \"~2.0.3\",\n \"@types/node\": \"^14.15.0\",\n \"http-server\": \"^14.0.0\",\n \"jasmine-core\": \"~3.8.0\",\n \"jasmine-spec-reporter\": \"~5.0.0\",\n \"karma\": \"~6.3.2\",\n \"karma-chrome-launcher\": \"~3.1.0\",\n \"karma-coverage-istanbul-reporter\": \"~3.0.2\",\n \"karma-jasmine\": \"~4.0.0\",\n \"karma-jasmine-html-reporter\": \"^1.5.0\",\n \"ts-node\": \"~8.3.0\",\n \"typescript\": \"~5.0.4\"\n }\n}"} +{"id": "000009", "text": "When implementing a ControlValueAccessor I need to dynamically display some content based on whether or not the control is required. I know I can do this to get the control:\nreadonly #control = inject(NgControl, {self: true})\nprotected parentRequires = false\n\nngOnInit(): void {\n this.#control.valueAccessor = this\n\n this.parentRequires = this.#control.control?.hasValidator(Validators.required) ?? false\n}\n\nbut that only checks to see if it's currently required. What I am not seeing though is how to detect changes. 
The parent is going to toggle the required attribute on/off based on other actions in the application.\nI'm looking for something like the non-existent this.#control.control.validatorChanges"} +{"id": "000010", "text": "I try to get sorted data from MatTableDataSource using this code:\nthis.source = this.dataSource.sortData(this.dataSource.filteredData,this.dataSource.sort);\n\nbut I got this error:\n\nArgument of type 'MatSort | null' is not assignable to parameter of type 'MatSort'.Type 'null' is not assignable to type 'MatSort\n\nI am using Angular 16."} +{"id": "000011", "text": "So I just updated my project from Angular v15 to v16, and suddenly I get a lot of missing imports errors thrown, such as error NG8001: 'mat-icon' is not a known element but I have imported everything accordingly to documentation in my app.module.ts:\nimport {MatIconModule} from '@angular/material/icon';\n\n@NgModule({\n declarations: [...],\n imports: [..., MatIconModule, ...],\n bootstrap: [AppComponent],\n})\nexport class AppModule {}\n\nOr am I missing something in my package.json? I have tried to update everything according to docs:\n \"dependencies\": {\n \"@angular-devkit/core\": \"^16.2.0\",\n \"@angular-devkit/schematics\": \"^16.2.0\",\n \"@angular/animations\": \"~16.2.1\",\n \"@angular/cdk\": \"^16.2.1\",\n \"@angular/common\": \"~16.2.1\",\n \"@angular/compiler\": \"~16.2.1\",\n \"@angular/core\": \"~16.2.1\",\n \"@angular/forms\": \"~16.2.1\",\n \"@angular/material\": \"^16.2.1\",\n \"@angular/platform-browser\": \"^16.2.1\",\n \"@angular/platform-browser-dynamic\": \"~16.2.1\",\n \"@angular/router\": \"~16.2.1\",\n \"bootstrap\": \"^4.4.1\",\n \"moment\": \"^2.26.0\",\n \"popper.js\": \"^1.16.0\",\n \"rxjs\": \"^6.5.5\",\n \"tslib\": \"^2.0.0\",\n \"xstate\": \"~4.6.7\",\n \"zone.js\": \"~0.13.1\"\n }\n\nI tried deleting node_modules folder and reinstalling, running npm install, and npm ci but nothing has worked till now. I only find the tip to add the missing module to app.module.ts but I have this already, has anyone also run into this problem and found a solution?"} +{"id": "000012", "text": "I just did import { OrderModule } from 'ngx-order-pipe'; in app.module.ts and added it in imports\n imports: [BrowserModule, OrderModule,...],\n\nand when I did ng serve, I am getting below failed to compile error"} +{"id": "000013", "text": "Let me preface this question with the fact that I started learning Angular about a month ago.\nBasically, I have a searchbar component and several different itemcontainer components (each of them displays a different type of item). In an attempt to have access to the serchbar value on any component, I created a searchbarService like so:\nimport { Injectable, signal, WritableSignal } from '@angular/core';\n\n@Injectable({\n providedIn: 'root'\n})\nexport class SearchBarService {\n\n searchTextSignal: WritableSignal = signal('');\n\n setSearch(text: string): void{\n this.searchTextSignal.set(text);\n }\n}\n\nThe searchbar component calls the setSearch method on input submit. So far so good. Now, the problem comes when trying to work with searchTextSignal on the itemcontainter components. 
I'm trying to use it like this:\nimport { Component, signal} from '@angular/core';\nimport { Factura } from 'src/app/interfaces/factura';\nimport { FacturaService } from 'src/app/services/factura.service'; //gets items from a placeholder array.\nimport { SearchBarService } from 'src/app/services/search-bar.service';\n\n@Component({\n selector: 'vista-facturas',\n templateUrl: './vista-facturas.component.html',\n styleUrls: ['./vista-facturas.component.css']\n})\nexport class VistaFacturasComponent {\n\n facturasArray: Factura[] = []; // has all items\n filteredFacturasArray = signal([]); // has all filtered items, and is the value that I want to get updated when the signal changes.\n\n constructor(private facturaService: FacturaService, public searchBarService: SearchBarService) { }\n\n getFacturas(): void { //initializes the arrays.\n this.facturaService.getFacturas().subscribe(facturasReturned => this.facturasArray = facturasReturned);\n this.filteredFacturasArray.set(this.facturasArray);\n }\n\n filterFacturas(): void{ // this method is likely the problem\n\n let text = this.searchBarService.searchTextSignal();\n\n if (!text) \n this.filteredFacturasArray.set(this.facturasArray);\n \n this.filteredFacturasArray.set(this.facturasArray.filter(factura => factura?.numero.toString().includes(text)));\n }\n\n ngOnInit(): void {\n this.getFacturas();\n }\n}\n\n\nThe templace uses ngFor like so:\n
\n\nSo, everything boils down to how to make VistaFacturasComponent call filterFacturas() when searchBarService.searchTextSignal() updates. Any ideas?"} +{"id": "000014", "text": "I have created a custom ui library using only standalone components and here's my public-api.ts file.\n/*\n * Public API Surface of ih-ui-lib\n */\n\nexport * from './lib/ui-lib.service';\nexport * from './lib/ui-lib.component';\nexport * from './lib/ui-lib.module';\n\n// Exporting components\nexport * from './lib/components/card/card.component';\nexport * from './lib/components/card/card-heading/card-heading.component';\nexport * from './lib/components/card/card-content/card-content.component';\nexport * from './lib/components/cards-responsive/cards-responsive.component';\nexport * from './lib/components/collapsible/collapsible.component';\nexport * from './lib/components/heading/heading.component';\nexport * from './lib/components/icon/icon.component';\nexport * from './lib/components/paragraph/paragraph.component';\nexport * from './lib/components/pill/pill.component';\nexport * from './lib/components/scrollbar/scrollbar.component';\nexport * from './lib/components/search/search.component';\nexport * from './lib/components/search/components/search-column/search-column.component';\nexport * from './lib/components/search/components/search-row/search-row.component';\nexport * from './lib/components/status-bar/status-bar.component';\nexport * from './lib/components/timeline/timeline.component';\n\n\nHere's a example of a component:\nimport { Component, Input, OnInit } from '@angular/core';\nimport { CommonModule } from '@angular/common';\n\n@Component({\n selector: 'card',\n standalone: true,\n imports: [CommonModule],\n templateUrl: './card.component.html',\n styleUrls: ['./card.component.css']\n})\nexport class CardComponent implements OnInit {\n\n @Input() classes: string = '';\n @Input() width: string = '';\n @Input() radius: string = 'sm';\n\n constructor() { }\n\n ngOnInit(): void {\n }\n\n}\n\nHere's how I'm adding to my app's package.json\n \"ui-library\": \"git+repo+url.git#branch\",\n\nI also have index.ts file at the root of my lib which just exports the public-api.ts file so I can access it from the root.\nexport * from './dist/ih-ui-lib/public-api';\nI created a new standalone component in my app and tried to import that component into my app.\nAnd that is when I get this error:\nTypeError: Cannot read properties of undefined (reading '\u0275cmp')\nI'm using angular 16.\nI tried using modules for components and still it is the same.\nI tried importing standalone component to a module in my app and it failed to recognise that component."} +{"id": "000015", "text": "I've updated my project to Angular 16. In app.module.ts, I have an array of components named entryComponents. However, the entryComponents is no longer available in Angular 16. 
Where should I add these components to my project:\nentryComponents:[\n PayResultDialogComponent,\n MessageBoxComponent\n ],"} +{"id": "000016", "text": "After Angular CanActivate interface became deprecated, I've changed my guards for simple const functions based on official documentation.\nFor example here is my inverseAuthGuard method, which seems working correctly:\nexport const inverseAuthGuard = (): boolean => {\n const authService = inject(AuthService);\n const router = inject(Router);\n if (authService.isAuthenticated()) {\n router.navigate(['/visual-check']);\n return false;\n }\n return true;\n};\n\nMy problem is that, I want to write some unit tests for it and I don't know how can I inject a mock authService and a mockRouter into this function. I've watched this video, which explains how can I inject mock services into a class, but for my guard function I couldn't make it working.\nI have tried some ways, but I couldn' find any solution.\nIf I do this way:\n\ndescribe('InverseAuthGuard', () => {\n beforeEach(() => {\n TestBed.configureTestingModule({\n imports: [HttpClientTestingModule, RouterTestingModule],\n providers: [\n { provide: AuthService, useValue: AuthService },\n { provide: Router, useValue: Router },\n ],\n });\n });\n\n fit('should return true on not authenticated user', () => {\n const result = inverseAuthGuard();\n expect(result).toBe(true);\n });\n});\n\nI've got the following error:\nNG0203: inject() must be called from an injection context such as a constructor, a factory function, a field initializer, or a function used with `runInInjectionContext`\n\nIf I do that way, what I saw in the video:\ndescribe('InverseAuthGuard', () => {\n const setupWithDI = (authService: unknown, router: unknown) =>\n TestBed.configureTestingModule({\n providers: [\n { provide: AuthService, useValue: authService },\n { provide: Router, useValue: router },\n ],\n }).inject(inverseAuthGuard);\n\n beforeEach(() => {\n TestBed.configureTestingModule({\n imports: [HttpClientTestingModule, RouterTestingModule],\n });\n });\n\n fit('should return true on not authenticated user', () => {\n const mockAuthService: unknown = { isAuthenticated: () => true };\n const mockRouter: Router = jasmine.createSpyObj(['navigate']);\n setupWithDI(mockAuthService, mockRouter);\n const result = inverseAuthGuard();\n expect(result).toBe(true);\n });\n});\n\nI've got the following error:\nNullInjectorError: No provider for inverseAuthGuard!\n\nOf course, I've tried providing inverseAuthGuard somehow, but without any success.\nI think there should be an easy solution for it, but I didn't find in any documentation. 
I will be thanksful for any answer."} +{"id": "000017", "text": "My student is asking me : << why should I inject stuff inside the constructor instead of injecting directly in the attribute of the class ?\nWhat I teach to her :\nUse injection inside the constructor\nhousingLocationList: HousingLocation[] = [];\nhousingService: HousingService = inject(HousingService);\n\nconstructor() {\n this.housingLocationList = this.housingService.getAllHousingLocations();\n}\n\nWhat She wants to do :\nInject the housing service directly inside the class attribute\n@Component({\n//...\n})\nexport class HomeComponent {\n\n housingService: HousingService = inject(HousingService);\n housingLocationList: HousingLocation[] = this.housingService.getAllHousingLocations();\n \n constructor() {}\n}\n\nWhat should I answer to her ?\nWhat I tried :\nI tried to convice her that it's a dogma and she should not think about it and just do it like that :)\nWhat I expected :\nShe accept my answer\nWhat actually resulted:\nShe still wants to know"} +{"id": "000018", "text": "I am trying to populate mat-table via dynamic data from an API.\nData is getting populated but pagenation part is unresponsive.\nI tried solutions provided in below links on Stackoverflow, non of them worked. I am using Angular 16 and angular material 16.2.10\nSolution1\nSolution2\nSolution3\nSolution4\nSolution5\nPFB my code:\nComponent.ts:\n\n\nimport { HttpClient } from '@angular/common/http';\nimport { Component, OnInit, ViewChild, AfterViewInit, ChangeDetectorRef } from '@angular/core';\nimport {MatPaginator, MatPaginatorModule} from '@angular/material/paginator';\nimport {MatTableDataSource, MatTableModule} from '@angular/material/table';\n\n@Component({\n selector: 'app-test-api',\n templateUrl: './test-api.component.html',\n styleUrls: ['./test-api.component.css'],\n standalone: true,\n imports: [MatTableModule, MatPaginatorModule]\n})\nexport class TestAPIComponent implements OnInit, AfterViewInit {\n public displayedColumns: string[] = ['id', 'name', 'email', 'city', 'latitude'];\n public getJsonValue: any;\n public dataSource: any = [];\n//code for pagination: tried all solutions from stackoverflow , none worked\n@ViewChild(MatPaginator) paginator: MatPaginator;\n //@ViewChild(MatPaginator, {read: true}) paginator: MatPaginator;\n \n /*@ViewChild(MatPaginator, {static: false})\n set paginator(value: MatPaginator) {\n this.dataSource.paginator = value;\n } */\n\n /*@ViewChild(MatPaginator) set matPaginator(mp: MatPaginator) {\n this.paginator = mp;\n this.dataSource.paginator = this.paginator;\n}*/\n \n constructor(private http : HttpClient){\n\n }\n ngOnInit(): void {\n this.getMethod();\n //this.cdr.detectChanges();\n }\n\n public getMethod(){\n this.http.get('https://jsonplaceholder.typicode.com/users').subscribe((data) => {\n console.table(data);\n console.log(this.paginator);\n this.getJsonValue = data;\n this.dataSource = data;\n //tried below code from stackoverflow but didn't work and commented ngAfterViewInit code\n this.dataSource.paginator = this.paginator;\n });\n \n }\n\n ngAfterViewInit() {\n //this.dataSource.paginator = this.paginator;\n } \n}\n\n\n\nHTML:\n\n\n

test-api works!\nTest API\nTest API: Dynamic\ndynamic-table works!\nId {{element.id}} Name {{element.name}} email {{element.email}} city {{element.address.city}} Latitude {{element.address.geo.lat}}
\n \n\n\n\nCurrent Table UI with disabled pagination:"} +{"id": "000019", "text": "I have this arrangement of the components, such that a component called landing-home.component loads another component client-registration-form.component using ViewContainerRef, on an , rendering on ngAfterViewInit.\nThe component client-registration-form.component represents a form with input fields. This component has a subject as\nmessageSource = new BehaviorSubject(new ClientRegistrationModel(..))\n\nwhich is the form's input data. I want to capture this data in the parent component landing-home.component.\nclient-registration-form.component.html\n
First Name
\n\nclient-registration-form.component.ts\nimport { Component, Injectable } from '@angular/core';\nimport { BehaviorSubject } from 'rxjs';\nimport {ClientRegistrationModel} from '../models/client-registration.model';\n\n@Component({\n selector: 'app-client-registration-form',\n templateUrl: './client-registration-form.component.html'\n})\n@Injectable()\nexport class ClientRegistrationFormComponent {\n clientRegistrationMoel : ClientRegistrationModel = new ClientRegistrationModel(\"\",\"\",\"\",\"\");\n constructor() {}\n private messageSource = new BehaviorSubject(new ClientRegistrationModel(\"\",\"\",\"\",\"\"));\n public currentMessage = this.messageSource.asObservable();\n\n OnSubmit()\n {\n this.messageSource.next(this.clientRegistrationMoel);\n }\n}\n\nlanding-home.component.html\n
\n\n\nlanding-home.component.js\nimport { Component, ViewChild, ViewContainerRef, Input, ChangeDetectorRef } from '@angular/core';\nimport {ClientRegistrationFormComponent} from '../client-registration-form/client-registration-form.component';\nimport {ClientRegistrationModel} from '../models/client-registration.model';\n\n@Component({\n selector: 'app-landing-home',\n templateUrl: './landing-home.component.html'\n})\n\nexport class LandingHomeComponent {\n @ViewChild('container', {read: ViewContainerRef}) container!: ViewContainerRef;\n constructor(private clientRegistrationFormComponent: ClientRegistrationFormComponent,\n private changeDetector: ChangeDetectorRef){}\n\n registrationDetails : ClientRegistrationModel = new ClientRegistrationModel('','','','');\n \n ngAfterViewInit()\n {\n // some condition\n this.container.createComponent(ClientRegistrationFormComponent);\n this.changeDetector.detectChanges(); \n }\n}\n\nWhat I am trying to achieve here is that I have a list of child components. Child component A, B, C etc. and a parent component P. The appropriate child will be loaded based on certain condition along with while loading the parent P. Now I want a way to transfer data such as form input (or may be just a boolean flag informing the parent that the form of the child is submitted successfully or failed over a REST call in the child) from the currently loaded child A or B or C.\nThe above code is just a try to find a way to do this and not necessarily has to follow the same structure but importantly I have a long list of child components and do not want to add those with *ngIf.\nLet me know if there is a better approach for such scenario."} +{"id": "000020", "text": "I'm migrating from angular 16 to 17 and I encountered the issue that I need to replace all the usages of *ngFor and *ngIf and ngSwitch with the new syntax (@for and @if and @switch).\nI know the v17 still supports the old syntax but is there a way to migrate them or a regex to replace them with the new form?"} +{"id": "000021", "text": "I am unable to add a scrollOffset option to my Angular 17 bootstrap config.\nBefore Angular 17, you'd have an app module that imports a routing module as such:\nimport { NgModule } from '@angular/core';\nimport { PreloadAllModules, RouterModule, Routes } from '@angular/router';\n\nconst routes: Routes = [\n {\n path: '',\n component: HomeComponent,\n },\n];\n\n@NgModule({\n imports: [\n RouterModule.forRoot(routes, {\n initialNavigation: 'enabledBlocking',\n scrollPositionRestoration: 'enabled',\n anchorScrolling: 'enabled',\n scrollOffset: [0, 100],\n preloadingStrategy: PreloadAllModules,\n }),\n ],\n exports: [RouterModule]\n})\nexport class AppRoutingModule { }\n\nIn Angular 17, you now pass a config object to the bootstrapApplication function, and I am unable to find a way to add the scrollOffset config as before (see above):\n// main.ts\n\nimport { bootstrapApplication } from '@angular/platform-browser';\nimport { appConfig } from './app/app.config';\nimport { AppComponent } from './app/app.component';\n\nbootstrapApplication(AppComponent, appConfig)\n .catch((err) => console.error(err));\n\n// app.config.ts\nimport { ApplicationConfig } from '@angular/core';\nimport { withInMemoryScrolling } from '@angular/router';\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideRouter(\n routes,\n withInMemoryScrolling({\n scrollPositionRestoration: 'enabled',\n anchorScrolling: 'enabled',\n }),\n //\u00a0Where can I put my 
scrollOffset???\n ),\n ],\n};"} +{"id": "000022", "text": "I have an angular project using ng2-right-click-menu for context menu\nSince with Angular 16 its not compatible, i have to switch to an alternative solution\nWhen i searched for Angular material menu\n \n\ncame across Angular CDK menu.\nimport {CdkMenuModule} from '@angular/cdk/menu;\n\nConfused which one to use for my application.\nAs the menu should be customizable."} +{"id": "000023", "text": "I\u2019m trying to use the new angular signal effect to listen on changes for a signal array of objects.\nBut the effect doesn\u2019t get called at any time.\nIt gets only called if I filter out one object of the array.\nPushing and updating an object of the array doesn\u2019t call the effect.\nHere\u2019s my code:\n// this code trigger the effect\ncase \"DELETE\":\n this.houses.update(((houses: House[]) => houses.filter((house: House) => house.id !== payload.old.id)));\n break;\n// this code doesn\u2019t trigger the effect\n case \"INSERT\":\n this.houses.update((houses: House[]) => {const updatedHouses = houses.push(payload.new); return updatedHouses;})\n break;\n\neffect(() => {\n this.filteredHouses = this.houses();\n this.onFilter();\n });\n\nObey of I reset the value of the signal and set the new value afterwards, the effect will be called. What am I doing wrong?"} +{"id": "000024", "text": "Attempting to implement a guard with Okta authentication in Angular v17, I encountered an error indicating a lack of provider for the OktaAuthStateService.\nUpon logging in through Okta, I gain access to the login status component. However, when attempting to navigate to the dashboard using routes, I encounter an error related to the absence of a provider, specifically the OktaAuthStateService.\nauth.guard.ts\nimport { Router, UrlTree, } from '@angular/router';\nimport { Injectable } from '@angular/core';\nimport { OktaAuthStateService } from '@okta/okta-angular';\nimport { Observable, map, take } from 'rxjs';\n\n@Injectable({ providedIn: 'root' }) export class AuthGuard { constructor( public authStateService: OktaAuthStateService, private router: Router ) {}\n\ncanActivate(): Observable { \nreturn this.authStateService.authState$.pipe( map((loggedIn) => { console.log('loggedIn', loggedIn);\nif (!loggedIn) {\n this.router.navigate(['/login']);\n return false;\n }\n return true;\n }),\n take(1)\n);\n} }\n\napp.module.ts\n\nimport { NgModule } from '@angular/core';\nimport { AppComponent } from './app.component';\nimport { OktaAuthModule, OKTA_CONFIG } from '@okta/okta-angular';\nimport { OktaAuth } from '@okta/okta-auth-js';\nimport myAppConfig from './app.config';\nconst oktaConfig = myAppConfig.oidc;\nconst oktaAuth = new OktaAuth(oktaConfig);\n\n@NgModule({\ndeclarations: [],\nimports: [OktaAuthModule],\nproviders: [{ provide: OKTA_CONFIG, useValue: { oktaAuth } }],\nbootstrap: [AppComponent],\n})\nexport class AppModule {}\n\napp.routes.ts\n\nimport { Routes, mapToCanActivate } from '@angular/router';\nimport { OktaCallbackComponent, OktaAuthGuard } from '@okta/okta-angular';\nimport { LoginComponent } from './modules/auth/components/login/login.component';\nimport { AuthGuard } from './core/guards/auth.guard';\nimport { DashboardComponent } from './modules/pages/dashboard/dashboard.component';\nimport { LoginStatusComponent } from './modules/auth/components/login-status/login-status.component';\nimport { CommonGuard } from './core/guards/common.guard';\n\nexport const routes: Routes = [\n{\npath: '',\nredirectTo: 'login',\npathMatch: 
'full',\n},\n{\npath: 'login',\ncomponent: LoginComponent,\n},\n{ path: 'login-status', component: LoginStatusComponent },\n{ path: 'implicit/callback', component: OktaCallbackComponent },\n{\npath: 'dashboard',\ncanActivate: mapToCanActivate([AuthGuard]),\ncomponent: DashboardComponent,\n},\n];"} +{"id": "000025", "text": "Since angular now has stanalone components, how do we show one comonent inside another like we used to. e.g inside app body\nI dont have any idea about how standalone components work and i'm a fresher in angular just migrated from angular 12 to angular 17."} +{"id": "000026", "text": "I am following the docs of angular from Angular Guard\nBelow is my Guest Guard Code. The logic is to check if the user is available or not,\nif available, redirect to dashboard else proceed to login page.\nimport { CanActivateFn } from '@angular/router';\nimport { Injectable } from '@angular/core';\n\n\n@Injectable()\n\nclass PermissionsService {\n canActivate(): boolean {\n return false;\n }\n\n}\n\nexport const guestGuard: CanActivateFn = (route, state) => {\n return inject(PermissionsService).canActivate();\n};\n\nBut this code throws error as\n[ERROR] TS2304: Cannot find name 'inject'. [plugin angular-compiler]\n\nsrc/app/guards/guest.guard.ts:15:13:\n 15 \u2502 return inject(PermissionsService).canActivate();"} +{"id": "000027", "text": "I have been googling this an there are many versions, most are old.\nI have an angular 16 project which was not made with standalone components but I've created this 1 standalone component which I want to load as a dialog.\nMy question is, in angular 16, how do I go about loading a standalone component without the use of routing or preloading it?\nCan it be done?\nAny guidance would be appreciated as there's just too many versions on the internet."} +{"id": "000028", "text": "Angular is failing to compile because of the following error and I'm really confused as to why.\nerror TS2322: Type 'string' is not assignable to type 'MenuItem'.\n\n4 \n ~~~~\n\n apps/angular-monorepo/src/app/app.component.ts:10:16\n 10 templateUrl: './app.component.html',\n ~~~~~~~~~~~~~~~~~~~~~~\n Error occurs in the template of component AppComponent.\n\nWhy is it complaining that item is of type 'string' when I specified that item is of type MenuItem or undefined\n@Component({\n standalone: true,\n imports: [NxWelcomeComponent, RouterModule, MenuItemComponent],\n selector: 'angular-monorepo-root',\n templateUrl: './app.component.html',\n styleUrl: './app.component.scss',\n})\nexport class AppComponent {\n title = 'angular-monorepo';\n menu: MenuItem[] = [\n {id: 101,category: 'appetizer', name:'french toast', price: 10.00},\n {id: 201,category: 'entree',sub_category:\"rice\",name:'pork', price: 10.00},\n {id: 301,category: 'drinks',name:'tea', price: 10.00},\n {id: 401,category: 'dessert',name:'affogato', price: 10.00},\n ]\n}\n//app.component.html\n\n

POS
\n@for (item of menu; track item.id) {\n \n}\n\n\n\n@Component({\n selector: 'menu-item',\n standalone: true,\n imports: [CommonModule],\n templateUrl: './menu-item.component.html',\n styleUrl: './menu-item.component.scss',\n})\nexport class MenuItemComponent {\n constructor() {\n\n }\n @Input({required: true}) item: MenuItem | undefined = undefined;\n \n @Output() itemSelectedEvent = new EventEmitter();\n}\n\n//menu-item.component.html\n@if (item) {\n

{{item.name}}
\n}\n\n\n//menu-item.type.ts\nexport type MenuItem = {\n id: number,\n category: ItemCategory,\n sub_category?: string,\n name: string,\n price: number\n }\n\nI expect item would be of type MenuItem like I specified"} +{"id": "000029", "text": "Example:\nhttps://stackblitz.com/edit/myxj6y?file=src%2Fexample%2Fsnack-bar-overview-example.ts\nI tried the class in styles.scss, with ng-deep, overriding the component's root class and it still doesn't work. I'm not using standalone components. What is wrong with the code ?\n\"@angular/material\": \"^16.2.9\",\n\"@angular/common\": \"^16.2.0\","} +{"id": "000030", "text": "I have updated my Angular app to version 16 and now in older browsers I am getting the error which says \"SyntaxError: private fields are not currently supported.\"\nI am trying to use polyfills to support modern browser features in the older browsers.\nHere is the polyfills.ts file:\nimport 'core-js';\nimport 'core-js/stable';\nimport 'regenerator-runtime/runtime';\n\nimport 'zone.js';\n\nThis is the tsconfig.json\n\"compilerOptions\": {\n \"target\": \"es2015\"\n },\n\nThis is the error on Firefox (v75):"} +{"id": "000031", "text": "after migrating to new angular 17 and updating my template, ng serve throws this message\nNG5002: Cannot parse expression. @for loop expression must match the pattern \" of\n\""} +{"id": "000032", "text": "I'm using Angular material's AutoComplete as follows\n
User\n@if(userCtrl.value?.length < 3) {\n Type 3 or more characters\n} @else if(isLoading) {\n loading...\n} @else if (!(filteredOptions | async)?.length) {\n No match found\n}\n@for (option of filteredOptions | async; track option) {\n {{option}}\n}
\n\nThis is more or less a copy-past from one of their examples.\nBut what I would like to add is text above the options (inside the overlay) if there are no options (yet)\n@if(userCtrl.value?.length < 3) { \n
Type 3 or more characters\n} @else if(isLoading) {\n loading...\n} @else if (!(filteredOptions | async)?.length) {\n No match found
\n}\n\nHowever, the overlay is closed when there are 0 options. Is there a way, such that I can show/activate the overlay when the input has focus and 0 options (empty array)? But when the user select an options, the focus is lost and the overlay closes\nDEMO"} +{"id": "000033", "text": "I'm developing a solution Angular 16 Material using the free theme Matero.\nI started from the downloadable demo so Angular Core ^16.2.7 etc.(https://github.com/ng-matero/ng-matero/blob/main/package.json), deleting the unuseful demo parts.\nI'm facing a problem with subscribing after a http call, i need to declare\n@Component({\n ...\n changeDetection: ChangeDetectionStrategy.OnPush,\n})\n\nin the constructor\nexport class LoginComponent {\n isSubmitting = false;\n ...\n constructor(\n ...\n private ref: ChangeDetectorRef,\n ) {}\n\nand finally after a call for example a login\n this.isSubmitting = true;\n this.auth\n .login(this.username.value, this.password.value, this.rememberMe.value)\n .pipe(filter(authenticated => authenticated))\n .subscribe({\n next: () => {\n this.isSubmitting = false; \n this.ref.markForCheck();\n },\n\nin the html for example a button\n\n\nWith the previous code, pratically it should happens nothing (there is no router redirection) but by clicking the button the spinner inside the button should cease to spin and appear back the \"Login\" text.\nBut this happens because the \"this.ref.markForCheck();\" otherwise without this call it ignores the change of isSubmitting and the spinner remain here.\nThe same for an http call ( normal call with HttpClient that returns an Observable) with binding to a mtx-grid, the binding succeed only by calling \"this.ref.markForCheck();\" in the \"subscribe\".\nAngular CLI is 16.2.10\nWhat i'm doing in a wrong manner ?"} +{"id": "000034", "text": "The first step\n ng add @angular-eslint/schematics\n\nexecutes successfully but the second step\n ng g @angular-eslint/schematics:convert-tslint-to-eslint\n\nproduces this error:\n Error: The `convert-tslint-to-eslint` schematic is no longer supported.\n\n Please see https://github.com/angular-eslint/angular-eslint/blob/main/docs/MIGRATING_FROM_TSLINT.md\n\nand the readme document referenced in the error message is the one I was following to attempt this migration.\nI successfully used this schematic about about three weeks ago.\nHas anyone else encountered this error message? Know of a workaround?\nAngular CLI: 17.0.7\nNode: 18.13.0\nPackage Manager: npm 8.19.3\nOS: darwin x64\n\nAngular: 17.0.7\n... animations, cli, common, compiler, compiler-cli, core, forms\n... language-service, localize, platform-browser\n... 
platform-browser-dynamic, router\n\nPackage Version\n---------------------------------------------------------\n@angular-devkit/architect 0.1700.7\n@angular-devkit/build-angular 17.0.7\n@angular-devkit/core 17.0.7\n@angular-devkit/schematics 17.0.7\n@schematics/angular 17.0.7\nrxjs 7.8.1\ntypescript 5.2.2\nzone.js 0.14.2"} +{"id": "000035", "text": "I have an Angular 17 application which uses standalone components, the initial routes are set up like so in app.routes.ts\nexport const appRoutes: Array = [\n { path: '', redirectTo: '/dashboard', pathMatch: 'full' },\n {\n path: 'login',\n component: LoginComponent,\n title: 'Login',\n },\n {\n path: '',\n canActivateChild: [AuthGuard],\n loadChildren: () => import(`./app-authorized.routes`).then((r) => r.appAuthorizedRoutes),\n },\n { path: '**', redirectTo: '/dashboard' },\n];\n\nOnce the user logs in they are authorized and redirected to /dashboard, and the app-authorized.routes.ts routes are loaded. Here is what that file looks like:\nexport const appAuthorizedRoutes: Array = [\n {\n path: 'dashboard',\n component: DashboardComponent,\n canActivate: [AuthGuard],\n title: 'Dashboard',\n },\n {\n path: 'settings',\n component: SettingsComponent,\n canActivate: [AuthGuard],\n title: 'Settings',\n },\n //etc...\n];\n\nAn issue I have is that after logging in, there is a delay as the data loads and the UI looks strange. I have a navigation bar set to appear when authorized, which shows but the login component is also still showing - which is wrong.\nAfter logging in and while the lazy-loaded chunks are loading, is there a way to display this progress in the UI somehow?"} +{"id": "000036", "text": "I have an Angular 17 application using standalone components, the initial routes are set up like so in app.routes.ts\nexport const appRoutes: Array = [\n { path: '', redirectTo: '/dashboard', pathMatch: 'full' },\n {\n path: 'login',\n component: LoginComponent,\n title: 'Login',\n },\n {\n path: '',\n canActivateChild: [AuthGuard],\n loadChildren: () => import(`./app-authorized.routes`).then((r) => r.appAuthorizedRoutes),\n },\n { path: '**', redirectTo: '/dashboard' },\n];\n\nOnce the user logs in they are authorized and redirected to /dashboard, and the app-authorized.routes.ts routes are loaded. Here is what that file looks like:\nexport const appAuthorizedRoutes: Array = [\n {\n path: 'dashboard',\n component: DashboardComponent,\n canActivate: [AuthGuard],\n title: 'Dashboard',\n },\n {\n path: 'settings',\n component: SettingsComponent,\n canActivate: [AuthGuard],\n title: 'Settings',\n },\n //etc...\n];\n\nI have several services that can only be used once the user logs in, but currently upon inspecting the chunked files Angular loads, all of the services are loaded initially at the login page. Of course this makes sense because they are decorated with\n@Injectable({\n providedIn: 'root',\n})\n\nModules of course would make this easy, but since I'm not using modules how do I tell my application to include only certain services along with the lazy-loaded routes, or just any way after the user logs in?"} +{"id": "000037", "text": "I am migrating old angular project to latest angular 17. I changed class based auth guard to functional auth guard. 
The problem I am having is that I get this error:\nERROR NullInjectorError: NullInjectorError: No provider for _UserService!\nat NullInjector.get (core.mjs:5626:27)\nat R3Injector.get (core.mjs:6069:33)\nat R3Injector.get (core.mjs:6069:33)\nat injectInjectorOnly (core.mjs:911:40)\nat \u0275\u0275inject (core.mjs:917:42)\nat inject (core.mjs:1001:12)\nat authGuard (auth.guard.ts:6:23)\nat router.mjs:3323:134\nat runInInjectionContext (core.mjs:6366:16)\nat router.mjs:3323:89\n\nHere is my authGuard code:\nimport {CanActivateFn, Router} from '@angular/router';\nimport {inject} from \"@angular/core\";\nimport {UserService} from \"../users/user.service\";\n\nexport const authGuard: CanActivateFn = (route, state) => {\n const userService = inject(UserService);\n const router = inject(Router);\n\n if (!userService.is_authenticated()) {\n router.navigate(['login', state.url]);\n return false;\n }\n return true;\n};\n\nHere is part of my UserService:\nimport {Injectable} from '@angular/core';\nimport { JwtHelperService } from '@auth0/angular-jwt';\nimport {HttpClient} from '@angular/common/http';\n\n@Injectable()\nexport class UserService {\n private usersUrl = '/users/';\n\n constructor(private http: HttpClient,\n private jwtHelper: JwtHelperService) { }\n\n ...\n\n public is_authenticated(): boolean {\n const token = localStorage.getItem('token');\n // Check whether the token is expired and return\n // true or false\n return !this.jwtHelper.isTokenExpired(token);\n }\n}\n\nAs I understand the documentation I don't need to provide UserService anywhere. Using 'inject' should be enough. What am I doing wrong?"} +{"id": "000038", "text": "I apologize in advance if i am asking too stupid questions but i am really new to angular and i do not understand how to handle a JSON object coming from the server and convert that object into a custom datatype so i can use that data to render on html using ngFor.\nI have tried multiple things but nothing seems to work. Any help will be very much appreciated.\nP.S. please excuse me for the extremely simple html page, application is coming up from scratch and i am working on functionalities and backend server connections before working on the designs.\nBelow is the Code and screenshots attached.\nEmployee.Component.html\n\n

Inside Employee Component.\nEmployee List\n{{ employees }}
\n\nemployee.component.ts file\n\nemployees: any;\n\n constructor(private service: EmployeeServiceService){\n }\n ngOnInit(){\n\n }\n\n public getAllEmployees1(){\n this.service.getAllEmployees().subscribe((data)=>{\n\n this.employees = data;\n console.log(\"Response: \"+ this.employees);\n },\n (error) =>{\n console.error('Error fetching employees: ', error);\n }\n );\n }\n\nEmployeeService file:\n\n@Injectable({\n providedIn: 'root'\n})\nexport class EmployeeServiceService {\n\n constructor(private http:HttpClient) { }\n\n getAllEmployees(){\n console.log(\"Inside get ALL Employees Method.\");\n return this.http.get(\"https://localhost:9090/employee/getAllEmployees\",{responseType:'text' as 'json'});\n }\n\nEmployee class type:\n\nexport class AddEmployee{\n firstName: any;\n lastName: any;\n address:any;\n project:any\n ssn:any;\n joinDate:any;\n status:any\n\n constructor(\n firstName: string,\n lastName: string,\n address:string,\n project:string,\n ssn:number,\n joinDate:Date,\n status:number\n ){}\n }\n\nI wanted to convert the JSON data coming from the server to AddEmployee type and then run a ngFor loop in the html so i can put everything in the tabular format.\nBut angular keeps on complaining that the data i am getting is in Object Format and ngFor can only be used on observables and iterators. Below is the image attached of how the object gets pulled from server and when i click on getAllEmployees button, it just renders the object itself. I am not able to print the data if i dont call {{ employees }} directly.\nenter image description here\nError Page:"} +{"id": "000039", "text": "I want to create a dynamic form that is an array of payments, the user can add a new payment, delete from the array, and edit.\nMy HTML:\n
@for (\n createLoanPaymentForm of createLoanPaymentsForm.controls; // here is the error\n track $index\n) {\n}
\n\nThe configuration of my component:\n@Component({\n selector: 'app-create-loan-dialog',\n standalone: true,\n imports: [\n MatInputModule,\n MatButtonModule,\n MatDialogTitle,\n MatDialogContent,\n MatDialogActions,\n MatDialogClose,\n ReactiveFormsModule,\n MatStepperModule,\n ],\n providers: [\n {\n provide: STEPPER_GLOBAL_OPTIONS,\n useValue: { showError: true },\n },\n ],\n templateUrl: './create-loan-dialog.component.html',\n})\n\nMy FormGroup:\ncreateLoanPaymentsForm: FormGroup = this.formBuilder.group({\n payments: this.formBuilder.array([]),\n});\n\nThere is an error in my loop, it says:\n\nType '{ [key: string]: AbstractControl; }' must have a 'Symbol.iterator' method that returns an iterator.\n\nThe solution for this bug, possible the correct configuration for a FormArray loop in Angular 17"} +{"id": "000040", "text": "I recently Upgraded to Angular to V17 with SSR and when i tried to load page this error is coming. ERROR Error: NullInjectorError: No provider for SocialAuthServiceConfig!\nNote: - I am using Only standalone components (No modules)\nNeed help to resolve this issue\nERROR Error: NullInjectorError: No provider for SocialAuthServiceConfig!\n at t (angular/node_modules/zone.js/fesm2015/zone-error.js:85:33)\n at NullInjector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:5626:27)\n at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6069:33)\n at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6069:33)\n at injectInjectorOnly (angular/node_modules/@angular/core/fesm2022/core.mjs:911:40)\n at Module.\u0275\u0275inject (angular/node_modules/@angular/core/fesm2022/core.mjs:917:42)\n at initialState (angular/node_modules/@abacritt/angularx-social-login/fesm2022/abacritt-angularx-social-login.mjs:374:46)\n at eval (angular/node_modules/@angular/core/fesm2022/core.mjs:6189:43)\n at runInInjectorProfilerContext (angular/node_modules/@angular/core/fesm2022/core.mjs:867:9)\n at R3Injector.hydrate (angular/node_modules/@angular/core/fesm2022/core.mjs:6188:17)\n at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6058:33)\n at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6069:33)\n at ChainedInjector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:15378:36)\n at lookupTokenUsingModuleInjector (angular/node_modules/@angular/core/fesm2022/core.mjs:4137:39)\n at getOrCreateInjectable (angular/node_modules/@angular/core/fesm2022/core.mjs:4185:12) {\n originalStack: 'Error: NullInjectorError: No provider for SocialAuthServiceConfig!\\n' +\n ' at t (angular/node_modules/zone.js/fesm2015/zone-error.js:85:33)\\n' +\n ' at NullInjector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:5626:27)\\n' +\n ' at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6069:33)\\n' +\n ' at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6069:33)\\n' +\n ' at injectInjectorOnly (angular/node_modules/@angular/core/fesm2022/core.mjs:911:40)\\n' +\n ' at Module.\u0275\u0275inject (angular/node_modules/@angular/core/fesm2022/core.mjs:917:42)\\n' +\n ' at initialState (angular/node_modules/@abacritt/angularx-social-login/fesm2022/abacritt-angularx-social-login.mjs:374:46)\\n' +\n ' at eval (angular/node_modules/@angular/core/fesm2022/core.mjs:6189:43)\\n' +\n ' at runInInjectorProfilerContext (angular/node_modules/@angular/core/fesm2022/core.mjs:867:9)\\n' +\n ' at R3Injector.hydrate (angular/node_modules/@angular/core/fesm2022/core.mjs:6188:17)\\n' +\n 
' at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6058:33)\\n' +\n ' at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6069:33)\\n' +\n ' at ChainedInjector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:15378:36)\\n' +\n ' at lookupTokenUsingModuleInjector (angular/node_modules/@angular/core/fesm2022/core.mjs:4137:39)\\n' +\n ' at getOrCreateInjectable (angular/node_modules/@angular/core/fesm2022/core.mjs:4185:12)',\n zoneAwareStack: 'Error: NullInjectorError: No provider for SocialAuthServiceConfig!\\n' +\n ' at t (angular/node_modules/zone.js/fesm2015/zone-error.js:85:33)\\n' +\n ' at NullInjector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:5626:27)\\n' +\n ' at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6069:33)\\n' +\n ' at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6069:33)\\n' +\n ' at injectInjectorOnly (angular/node_modules/@angular/core/fesm2022/core.mjs:911:40)\\n' +\n ' at Module.\u0275\u0275inject (angular/node_modules/@angular/core/fesm2022/core.mjs:917:42)\\n' +\n ' at initialState (angular/node_modules/@abacritt/angularx-social-login/fesm2022/abacritt-angularx-social-login.mjs:374:46)\\n' +\n ' at eval (angular/node_modules/@angular/core/fesm2022/core.mjs:6189:43)\\n' +\n ' at runInInjectorProfilerContext (angular/node_modules/@angular/core/fesm2022/core.mjs:867:9)\\n' +\n ' at R3Injector.hydrate (angular/node_modules/@angular/core/fesm2022/core.mjs:6188:17)\\n' +\n ' at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6058:33)\\n' +\n ' at R3Injector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:6069:33)\\n' +\n ' at ChainedInjector.get (angular/node_modules/@angular/core/fesm2022/core.mjs:15378:36)\\n' +\n ' at lookupTokenUsingModuleInjector (angular/node_modules/@angular/core/fesm2022/core.mjs:4137:39)\\n' +\n ' at getOrCreateInjectable (angular/node_modules/@angular/core/fesm2022/core.mjs:4185:12)',\n ngTempTokenPath: null,\n ngTokenPath: [\n '_SocialAuthService',\n '_SocialAuthService',\n 'SocialAuthServiceConfig',\n 'SocialAuthServiceConfig'\n ]\n}"} +{"id": "000041", "text": "I am learning Angular multiple content projection from new Angular 17 docs.\nWhen I am using example from doc I am getting error:\nprofile.component.html::\n
In app.component.html::\nHeader 1\nThis is projected content
\n\nI am getting this error::\nNG8001: 'profile-header' is not a known element:\n\nHow can I resolve it?"} +{"id": "000042", "text": "Following is my Standalone api calls containing service:\n\n\nimport { Injectable} from '@angular/core';\nimport { ProductEndPoints } from '../../constants/apiConstants/product-endpoints';\nimport { HttpClient} from '@angular/common/http';\nimport { Observable } from 'rxjs';\nimport { environment } from 'src/environments/environment.development';\nimport { product } from '../../models/product.types';\n@Injectable({\n providedIn: 'root',\n})\nexport class ProductService {\n\n constructor(private http:HttpClient) { }\n\n getAllProducts():Observable{\n return this.http.get(environment.apiUrl+`${ProductEndPoints.getAllProduct}`)\n }\n\n getProductDetailById(id:string):Observable{\n return this.http.get(environment.apiUrl+`${ProductEndPoints}/${id}`)\n }\n}\n\n\n\nI also added this service to target component's providers array.\nThe error I got while injecting it to the standalone component is :\nsrc_app_pages_Product_product-routing_ts.js:2 ERROR Error: Uncaught (in promise): NullInjectorError: R3InjectorError(Standalone[ProductListComponent])[HttpClient -> HttpClient -> HttpClient -> HttpClient]: \n NullInjectorError: No provider for HttpClient!\nNullInjectorError: R3InjectorError(Standalone[ProductListComponent])[HttpClient -> HttpClient -> HttpClient -> HttpClient]: \n NullInjectorError: No provider for HttpClient!\n at NullInjector.get (core.mjs:8890:27)\n at R3Injector.get (core.mjs:9334:33)\n at R3Injector.get (core.mjs:9334:33)\n at R3Injector.get (core.mjs:9334:33)\n at R3Injector.get (core.mjs:9334:33)\n at ChainedInjector.get (core.mjs:14018:36)\n at lookupTokenUsingModuleInjector (core.mjs:4608:39)\n at getOrCreateInjectable (core.mjs:4656:12)\n at \u0275\u0275directiveInject (core.mjs:11801:19)\n at Module.\u0275\u0275inject (core.mjs:848:60)\n at resolvePromise (zone.js:1193:31)\n at resolvePromise (zone.js:1147:17)\n at zone.js:1260:17\n at _ZoneDelegate.invokeTask (zone.js:402:31)\n at core.mjs:10757:55\n at AsyncStackTaggingZoneSpec.onInvokeTask (core.mjs:10757:36)\n at _ZoneDelegate.invokeTask (zone.js:401:60)\n at Object.onInvokeTask (core.mjs:11070:33)\n at _ZoneDelegate.invokeTask (zone.js:401:60)\n at Zone.runTask (zone.js:173:47)"} +{"id": "000043", "text": "I'm quite new to Angular.\nI have this HTML file new-team.component.html:\n\n
\n
\n
\n
\n
\n New team creation\n
\n
\n
\n
\n \n \n
\n
Team Name is required
\n
\n Your team name must be at least 6 characters long and without special characters except -\n
\n
\n
\n
\n \n \n {{item.frenchName}}\n \n \n \n {{item.frenchName}}\n \n \n
\n \n
\n
\n
\n
\n
\n
\n\nand this is my component file:\nimport { Component, OnDestroy, OnInit } from '@angular/core';\nimport { NgForm } from \"@angular/forms\";\nimport { Subscription } from \"rxjs\";\nimport { Country } from 'src/app/_models/country.model';\nimport { CountryService } from 'src/app/_services/country.service';\nimport { User } from \"../../_models/user.model\";\nimport { AuthService } from \"../../_services/auth.service\";\n\n@Component({\n selector: 'app-new-team',\n templateUrl: './new-team.component.html',\n styleUrls: ['./new-team.component.scss']\n})\nexport class NewTeamComponent implements OnInit, OnDestroy {\n user!: User;\n countries: Country[] = [];\n AuthUserSub!: Subscription;\n\n constructor(\n private authService: AuthService,\n private countryService: CountryService\n ) {\n }\n ngOnInit(): void {\n\n this.AuthUserSub = this.authService.AuthenticatedUser$.subscribe({\n next: user => {\n if (user) this.user = user;\n }\n })\n\n this.countryService.getAllCountries().subscribe({\n next: data => {\n this.countries = data;\n this.countries.forEach(element => {\n element.logo = \"/assets/flags/\" + element.logo;\n });\n },\n error: err => console.log(err)\n })\n }\n\n onSubmitNewTeam(formNewTeam: NgForm) {\n console.log(formNewTeam);\n if (!formNewTeam.valid) {\n return;\n }\n }\n\n ngOnDestroy() {\n this.AuthUserSub.unsubscribe();\n }\n}\n\nOn the line where I call the console.log(formNewTeam); on my .ts file I just have the value of the input field, not the value selected into the .\nHow can I send these two values (input field + value of the ) to my backend API?\nBy the way, the Country object contains id, frenchName, and logo.\nI should receive the form with these two values for example: teamName = \"Real Madrid\" and countryId = \"10\"\nThank you in advance."} +{"id": "000044", "text": "I am trying to work with AWS in angular but at the very start after I install AWS-SDK:\nnpm install aws-sdk\n\nAfter adding the below to my file-manager.ts, I am getting errors regarding node and stream.\nimport * as aws from 'aws-sdk';\n\nI added the following as suggested buy the compiler:\nTry `npm i --save-dev @types/node` and then add 'node' to the types field in your tsconfig.\n\nand still getting so many errors."} +{"id": "000045", "text": "I am trying to build Angular 17 application with SSR, using built in i18n mechanism. 
And I don't get how to configure it to work together.\nv17 is brand new and there are blank spaces in documentation and not a lot of examples over the Internet.\nWhen creating simple application with Angular+SSR it creates server.ts alongside base application\n// imports\n\n// The Express app is exported so that it can be used by serverless Functions.\nexport function app(): express.Express {\n const server = express();\n const serverDistFolder = dirname(fileURLToPath(import.meta.url));\n const browserDistFolder = resolve(serverDistFolder, '../browser');\n const indexHtml = join(serverDistFolder, 'index.server.html');\n\n const commonEngine = new CommonEngine();\n\n server.set('view engine', 'html');\n server.set('views', browserDistFolder);\n\n // Example Express Rest API endpoints\n // server.get('/api/**', (req, res) => { });\n // Serve static files from /browser\n server.get('*.*', express.static(browserDistFolder, {\n maxAge: '1y'\n }));\n\n // All regular routes use the Angular engine\n server.get('*', (req, res, next) => {\n const { protocol, originalUrl, baseUrl, headers } = req;\n\n commonEngine\n .render({\n bootstrap,\n documentFilePath: indexHtml,\n url: `${protocol}://${headers.host}${originalUrl}`,\n publicPath: browserDistFolder,\n providers: [{ provide: APP_BASE_HREF, useValue: baseUrl }],\n })\n .then((html) => res.send(html))\n .catch((err) => next(err));\n });\n\n return server;\n}\n\nfunction run(): void {\n const port = process.env['PORT'] || 4000;\n\n // Start up the Node server\n const server = app();\n server.listen(port, () => {\n console.log(`Node Express server listening on http://localhost:${port}`);\n });\n}\n\nrun();\n\n\nand after building the app it creates the following structure in dist folder:\n# simple-ssr\n\n* [browser/](./simple-ssr/browser)\n * [first/](./simple-ssr/browser/first)\n * [index.html](./simple-ssr/browser/first/index.html)\n * [home/](./simple-ssr/browser/home)\n * [index.html](./simple-ssr/browser/home/index.html)\n * [second/](./simple-ssr/browser/second)\n * [index.html](./simple-ssr/browser/second/index.html)\n * [favicon.ico](./simple-ssr/browser/favicon.ico)\n * [index.html](./simple-ssr/browser/index.html)\n * [main-OUKHBY7S.js](./simple-ssr/browser/main-OUKHBY7S.js)\n * [polyfills-LZBJRJJE.js](./simple-ssr/browser/polyfills-LZBJRJJE.js)\n * [styles-Y4IFJ72L.css](./simple-ssr/browser/styles-Y4IFJ72L.css)\n* [server/](./simple-ssr/server)\n * [chunk-53JWIC36.mjs](./simple-ssr/server/chunk-53JWIC36.mjs)\n * ... 
other chunks\n * [index.server.html](./simple-ssr/server/index.server.html)\n * [main.server.mjs](./simple-ssr/server/main.server.mjs)\n * [polyfills.server.mjs](./simple-ssr/server/polyfills.server.mjs)\n * [render-utils.server.mjs](./simple-ssr/server/render-utils.server.mjs)\n * [server.mjs](./simple-ssr/server/server.mjs)\n* [3rdpartylicenses.txt](./simple-ssr/3rdpartylicenses.txt)\n* [prerendered-routes.json](./simple-ssr/prerendered-routes.json)\n\n\nrunning node dist/simple-ssr/server/server.mjs starts the Express server and everything works fine.\nThe problem starts after adding Angular built-in i18n.\nAfer seetting up everything and localizing the app it works okay with ng serve.\nBut building dist version it generates another nested structure:\n# simple-ssr-with-i18n\n\n* [browser/](./my-app/browser)\n * [en-US/](./my-app/browser/en-US)\n * [assets/](./my-app/browser/en-US/assets)\n * [img/](./my-app/browser/en-US/assets/img)\n * [first/](./my-app/browser/en-US/first)\n * [index.html](./my-app/browser/en-US/first/index.html)\n * [home/](./my-app/browser/en-US/home)\n * [index.html](./my-app/browser/en-US/home/index.html)\n * [second/](./my-app/browser/en-US/second)\n * [index.html](./my-app/browser/en-US/second/index.html)\n * [favicon.ico](./my-app/browser/en-US/favicon.ico)\n * [index.html](./my-app/browser/en-US/index.html)\n * [main-VKL3SVOT.js](./my-app/browser/en-US/main-VKL3SVOT.js)\n * [polyfills-LQWQKVKW.js](./my-app/browser/en-US/polyfills-LQWQKVKW.js)\n * [styles-UTKJIBJ7.css](./my-app/browser/en-US/styles-UTKJIBJ7.css)\n * [uk/](./my-app/browser/uk)\n * [assets/](./my-app/browser/uk/assets)\n * [img/](./my-app/browser/uk/assets/img)\n * [first/](./my-app/browser/uk/first)\n * [index.html](./my-app/browser/uk/first/index.html)\n * [home/](./my-app/browser/uk/home)\n * [index.html](./my-app/browser/uk/home/index.html)\n * [second/](./my-app/browser/uk/second)\n * [index.html](./my-app/browser/uk/second/index.html)\n * [favicon.ico](./my-app/browser/uk/favicon.ico)\n * [index.html](./my-app/browser/uk/index.html)\n * [main-VKL3SVOT.js](./my-app/browser/uk/main-VKL3SVOT.js)\n * [polyfills-LQWQKVKW.js](./my-app/browser/uk/polyfills-LQWQKVKW.js)\n * [styles-UTKJIBJ7.css](./my-app/browser/uk/styles-UTKJIBJ7.css)\n* [server/](./my-app/server)\n * [en-US/](./my-app/server/en-US)\n * [index.server.html](./my-app/server/en-US/index.server.html)\n * [main.server.mjs](./my-app/server/en-US/main.server.mjs)\n * [polyfills.server.mjs](./my-app/server/en-US/polyfills.server.mjs)\n * [render-utils.server.mjs](./my-app/server/en-US/render-utils.server.mjs)\n * [server.mjs](./my-app/server/en-US/server.mjs)\n * [uk/](./my-app/server/uk)\n * [index.server.html](./my-app/server/uk/index.server.html)\n * [main.server.mjs](./my-app/server/uk/main.server.mjs)\n * [polyfills.server.mjs](./my-app/server/uk/polyfills.server.mjs)\n * [render-utils.server.mjs](./my-app/server/uk/render-utils.server.mjs)\n * [server.mjs](./my-app/server/uk/server.mjs)\n* [3rdpartylicenses.txt](./my-app/3rdpartylicenses.txt)\n* [prerendered-routes.json](./my-app/prerendered-routes.json)\n\nfolder structure for i18n & ssr\nObviously pathes in server.ts and, as a result, in dist/simple-ssr-with-i18n/server/en-US/server.mjs are not set up right for working correctly with different locale versions.\nAnd as I imagined it should work simply with following changes\n const languageFolder = basename(serverDistFolder);\n const languagePath = `/${languageFolder}/`;\n const browserDistFolder = resolve(\n serverDistFolder,\n 
'../../browser' + languagePath\n );\n\nand running separate express server instance for each locale. (Ideally one server serving all locales, of course)\nBut all of my attempts were not successful, running node dist/simple-ssr/server/server.mjs leads to unresponsive site with errors fetching static .js file chunks.\nMay somebody provide some comprehensive example for server.ts and setting up i18n+ssr together?\nOnly relible article I found is Angular-universal-and-i18n-working-together\nbut it's outdated, and i get built-time errors on baseHref step.\nP.S. Chatgpt is aware of Angular 15 and Universal, so it's not also very helpfull."} +{"id": "000046", "text": "We are trying to implement deferrable views for a component in angular. This component is present in a component which is used by a parent in another repo. While defer seems to be working when we implemented it inside component of the same project, its not working when imported and used in a library. Two issue here actually:\n\ncode is not split into a new bundle but loaded along with the main library bundle\nplaceholder element appears for a split second and then the view disappears. on checking the html i found that I cannot see the child elements of the deferred component, its just like a dummy element\n\n\nHere are the things which I have followed as a requirement:\n\nUsing angular 17 in both the main project and library project\nUsing on viewport condition to defer the block\nthe component inside the defer block is a standalone component\ncomponent is not used anywhere outside the defer block and also not referenced using viewchild\n\nIs there anything I'm doing wrong or any additional requirement I need to follow ?"} +{"id": "000047", "text": "I have this\n\n {{ dt }}\n\n\nand I want to refactor to angular v17 syntax\n@for (dt of totals; track $index) {\n {{ dt }}\n}\n\nHow do I refactor the [ngClass] on the
?\nI tried this but obviously it doesn't work because the variable dt is not yet defined in the
\n
\n @for (dt of totals; track $index) {\n {{ dt }}\n }\n
\n\nI could try this but I don't want an extra ng-container on every element:\n
\n @for (dt of totals; track $index) {\n \n {{ dt }}\n \n }\n
\n\nWhat's the official way to do this?\nedit:\nThanks for the help, I got the resolution I needed. The question is silly because I forgot that the *ngFor repeats the element it is in and its children. I forgot that and thought it was only repeating its children. That is basic pre-v17 angular. I'll leave this question in case this catches anyone else."} +{"id": "000048", "text": "Hi i'm new to angular 17, i'm following the course tour of heroes on the website.\nAfter creating the project, i create a new component heroes just like the tutorial tells me to.\nI add the component selector to app.component.html like this :\n\nAs i am trying to serve the project i have this error :\n[ERROR] NG8001: 'app-heroes' is not a known element:\n\nIf 'app-heroes' is an Angular component, then verify that it is included in the '@Component.imports' of this component.\n\nIf 'app-heroes' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@Component.schemas' of this component to suppress this message. [plugin angular-compiler]\nsrc/app/app.component.html:0:0:\n0 \u2502\n\u2575 ^\n\n\nError occurs in the template of component AppComponent.\nsrc/app/app.component.ts:9:15:\n 9 \u2502 templateUrl: './app.component.html',\n\nCan someone help please ??\nI tried to do ng serve --open and want it to serve in my browser but it's not working."} +{"id": "000049", "text": "My Application has some generic pages like the landing or logout page which are navigatable when the user is not logged in. Those shall be rendered normally within the primary router-outlet.\nThen I have Pages that are for logged-in users as the core of the application state and those pages shall be rendered within a general layout component that contains the navigation, footer, header etc.\nI am having trouble to render those children within a named router-outlet that I expect to be within my layout component.\napp.routes.ts\nexport const routes: Routes = [\n{ path: '', redirectTo: 'landing', pathMatch: 'full' },\n{\n path: 'landing',\n component: LandingPageComponent\n},\n{\n path: 'intern',\n component: NavigationComponent,\n children: [\n {\n path: 'enterprise',\n component: OverviewComponent,\n },\n { path: '', redirectTo: 'enterprise', pathMatch: 'full' },\n ]\n},\n\n];\nnavigation.component.html\n
\n\n
\n\nLanding and Navigation component are rendered as expected, but the content of the pages that I want to be within the navigation component in the named router-outled \"intern\" are not there. Sidenote: As I understood - as long as the child route and the named router-outlet share the same name (here 'intern'), i do not need to define the \"outled: 'intern'\" property in the app.routes.ts"} +{"id": "000050", "text": "I want to implement AuthGuard using CanActivateFn in Angular 16 which should prevent to view some pages for unauthorized users.\nisUserLoggedIn method in UserService is a get request to the backend which returns a loggedin user or throws UNAUTHORIZED exception.\nI want to call isUserLoggedIn in AuthGuard and check if it throws UNAUTHORIZED exception then redirect to \"/unauth\" path otherwise it should return true and do nothing.\nThe problem: isUserLoggedIn returns Observable and CanActivateFn needs type boolean. Is there any way to do this without changing the actual GET request in backend ?\n@Injectable({providedIn: 'root'})\nexport class UserService {\n\n isUserLoggedIn(): Observable {\n const headers = new HttpHeaders({'Content-Type': 'application/json'});\n return this.apiHandlerService.get(API_URL, headers);\n }\n}\n\nimport {CanActivateFn, Router,} from '@angular/router';\nimport {inject} from '@angular/core';\nimport {catchError, map, of} from \"rxjs\";\n\nexport const AuthGuard: CanActivateFn = (route, state) => {\n const authService: UserService = inject(UserService);\n const router: Router = inject(Router);\n\n return authService.isUserLoggedIn().pipe(\n map(response => {\n console.log('Response ', response);\n return true;\n }),\n catchError(error => {\n router.navigate(['/unauthorized'])\n return of(false);\n })\n )\n}"} +{"id": "000051", "text": "I am running Angular 17 projects with standalone components.\nI want to add a service where i will start and stop the ngx-ui-loader (\nhttps://www.npmjs.com/package/ngx-ui-loader).\nI can't do normal module where i will provide in root ngxui loader like in older versions, so i am wondering if this can work in Angular17?\nWhen loader is directly called and imported in a standalone component it works.\nCurrent implementation is not working..\nService code:\nimport { Injectable, inject } from '@angular/core';\nimport { NgxUiLoaderService } from 'ngx-ui-loader';\n\n@Injectable({\n providedIn: 'root',\n})\nexport class LoaderService {\n// ngxLoader = inject(NgxUiLoaderService);\n ngxLoader = inject(NgxUiLoaderService);\n\n startLoader() {\n console.error('start');\n console.error(this.ngxLoader);\n this.ngxLoader.start();\n }\n\n stopLoader() {\n this.ngxLoader.stop();\n }\n\n\n loaderConfig =\n {\n \"bgsColor\": \"red\",\n \"bgsOpacity\": 0.5,\n \"bgsPosition\": \"bottom-right\",\n \"bgsSize\": 60,\n \"bgsType\": \"ball-spin-clockwise\",\n \"blur\": 5,\n \"delay\": 0,\n \"fastFadeOut\": true,\n \"fgsColor\": \"red\",\n \"fgsPosition\": \"center-center\",\n \"fgsSize\": 60,\n \"fgsType\": \"ball-spin-clockwise\",\n \"gap\": 24,\n \"logoPosition\": \"center-center\",\n \"logoSize\": 120,\n \"logoUrl\": \"\",\n \"masterLoaderId\": \"master\",\n \"overlayBorderRadius\": \"0\",\n \"overlayColor\": \"rgba(40, 40, 40, 0.8)\",\n \"pbColor\": \"red\",\n \"pbDirection\": \"ltr\",\n \"pbThickness\": 3,\n \"hasProgressBar\": true,\n \"text\": \"\",\n \"textColor\": \"#FFFFFF\",\n \"textPosition\": \"center-center\",\n \"maxTime\": -1,\n \"minTime\": 300\n }\n}\n\nAppComponent.ts\n\n\n`import { Component, inject } from 
'@angular/core';\nimport { CommonModule } from '@angular/common';\nimport { NavbarComponent } from './components/navbar/navbar.component';\nimport { NgxUiLoaderModule, NgxUiLoaderService } from 'ngx-ui-loader';\nimport { LoginComponent } from './components/login/login.component';\nimport { UserService } from './services/user.service';\nimport { StorageService } from './services/storage.service';\nimport { finalize, first } from 'rxjs';\nimport { LoaderService } from './services/loader.service';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n templateUrl: './app.component.html',\n styleUrl: './app.component.scss',\n imports: [CommonModule, NavbarComponent, NgxUiLoaderModule, LoginComponent],\n})\nexport class AppComponent {\n title = 'CinAppClient';\n userService = inject(UserService);\n storageService = inject(StorageService);\n ngxLoaderService = inject(NgxUiLoaderService);\n loaderService = inject(LoaderService);\n\n user$ = this.userService.user$;\n constructor() {}\n\n ngOnInit() {\n this.loaderService.startLoader();\n let storageUser = this.storageService.getUser();\n if (storageUser !== null) {\n this.userService.authToken = storageUser.token;\n this.userService\n .refreshToken()\n .pipe(\n first(),\n finalize(() => this.loaderService.stopLoader())\n )\n .subscribe();\n }\n }\n}\ntype here\n\nappComponent.html\n\n`\ntype here"} +{"id": "000052", "text": "Angular v17 onwards defaults to the Standalone approach in the CLI, and there's no more explicit use of @NgModule for application organization. My question is regarding lazy loading with the new default configuration.\nSuppose I define my routes in the following way (as im using standalone components):\nexport const routes: Routes = [\n {'path' : '', component: DashboardComponent},\n {'path' : 'users', component: UsersComponent},\n {'path' : '**', component: DashboardComponent},\n];\n\n\nWill lazy loading still work with this new standalone-oriented approach, or is it necessary to adhere to the traditional @NgModule and feature module way to apply the lazy loading concept in Angular v17? I'd appreciate any insights, experiences, or documentation references related to this potential change in behavior."} +{"id": "000053", "text": "I have this code\nthis.items.mutate(products => this.sourceData.getData().forEach(item => products.push(item)));\n\nupdating the library from Angular 16 to Angular 17 I need to remove 'mutate' using 'update' or 'set', but I don't know how to do it.\nI should change products.push(item)-> [...products, item] but I don't know how to do it with the forEach."} +{"id": "000054", "text": "I am encountering an issue with my Angular 17 SSR (Server-Side Rendering) application. I am using ApexCharts/ng-apexcharts, which currently only works on the browser side. The specific error message I'm facing is:\n\nI understand that this error is expected in an SSR environment, and disabling SSR resolves the issue. However, I am looking for a more robust solution. After some research, I came across the afterNextRender() function, which seems promising.\nI am seeking guidance on how to implement afterNextRender() to handle this situation in an Angular 17 SSR application. I believe this could be a valuable workaround, but I'm struggling with the implementation.\nAny help or suggestions would be greatly appreciated. 
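A minimal illustrative sketch of the afterNextRender idea being asked about here (not the asker's actual code): it assumes a #chart template reference and pulls ApexCharts in through a dynamic import inside the callback, so nothing that touches window runs during the server render; the selector, options and data are placeholders.

import { Component, ElementRef, ViewChild, afterNextRender } from '@angular/core';

@Component({
  selector: 'app-chart-sketch',
  standalone: true,
  template: '<div #chart></div>',
})
export class ChartSketchComponent {
  // Resolved before the first render; afterNextRender callbacks only ever run in the browser.
  @ViewChild('chart', { static: true }) chartEl!: ElementRef<HTMLDivElement>;

  constructor() {
    afterNextRender(async () => {
      // Dynamic import keeps the window-dependent ApexCharts bundle off the SSR pass.
      const { default: ApexCharts } = await import('apexcharts');
      const chart = new ApexCharts(this.chartEl.nativeElement, {
        chart: { type: 'line', height: 350 },
        series: [{ name: 'demo', data: [10, 41, 35] }],
        xaxis: { categories: ['Jan', 'Feb', 'Mar'] },
      });
      chart.render();
    });
  }
}

With this shape the static ApexCharts import at the top of the component can be dropped, which is usually what triggers the "window is not defined" log on the server.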
Thank you in advance for your assistance!\nps : the chart is displayed but ReferenceError: window is not defined in the terminal is annoying it occurs everytime i import ApexCharts\nHere's my current code snippet:\nmy dashboard.component.ts :\nimport { AfterRenderPhase, Component, ElementRef, OnInit, ViewChild, afterNextRender } from '@angular/core';\nimport ApexCharts from 'apexcharts';\nimport {\n ApexAxisChartSeries,\n ApexChart,\n ApexDataLabels,\n ApexLegend,\n ApexStroke,\n ApexTitleSubtitle,\n ApexXAxis,\n ApexYAxis,\n} from 'ng-apexcharts';\nexport type ChartOptions = {\n series: ApexAxisChartSeries;\n chart: ApexChart;\n xaxis: ApexXAxis;\n stroke: ApexStroke;\n dataLabels: ApexDataLabels;\n yaxis: ApexYAxis;\n title: ApexTitleSubtitle;\n labels: string[];\n legend: ApexLegend;\n subtitle: ApexTitleSubtitle;\n};\nexport const series = {\n // dummy data from apex docs\n};\n@Component({\n selector: 'app-dashboard',\n standalone: true,\n imports: [],\n templateUrl: './dashboard.component.html',\n styleUrl: './dashboard.component.scss'\n})\nexport class DashboardComponent implements OnInit {\n @ViewChild('chart') chart!: ElementRef;\n public chartOptions!: Partial;\n\n constructor() {\n // im struggling here\n afterNextRender(() => {\n const element = this.chart.nativeElement;\n var chart = new ApexCharts(element, this.chartOptions);\n chart.render()\n }, {phase: AfterRenderPhase.Read});\n }\n \n\n ngOnInit(): void {\n this.chartOptions = {\n series: [\n {\n name: 'STOCK ABC',\n data: series.monthDataSeries1.prices,\n },\n ],\n // chart options config from apex docs\n };\n }\n}\n\n\nin my dashboard.component.html\n
"} +{"id": "000055", "text": "Issue\nI'm working with an Angular v17 app configured in standalone mode, experiencing issues integrating with Keycloak libraries. Specifically, Keycloak isn't automatically appending the authorization header to backend requests. For security reasons, I prefer not to manually handle the Authorization Token.\n\nI installed Keycloak libs \"npm install keycloak-angular\"\nI added a provider for the Keycloak init\nI added some test code to signin and execute a request\n\nAll this code is working well with Angular non standalone (NgModule). But since I switched to standalone in angular 17, something is fishy.\nTo test my code, I have configured an Interceptor: authInterceptorProvider. That is adding the Token manually to each request. Works well. But I don't want to handle tokens by hand...\nWhat might I be missing or configuring wrong?\nCode bits (image upload is not working at the moment)\nHere my simplyfied Application config\n export const initializeKeycloak = (keycloak: KeycloakService) => {\nreturn () =>\n keycloak.init({\n config: {\n url: 'http://localhost:8180/',\n realm: 'balbliblub-realm',\n clientId: 'blabliblubi-public-client',\n },\n initOptions: {\n pkceMethod: 'S256',\n redirectUri: 'http://localhost:4200/dashboard',\n },\n loadUserProfileAtStartUp: false\n });}\n\n\nexport const appConfig: ApplicationConfig = {\nproviders: [provideRouter(routes),\n provideHttpClient(\n withFetch(),\n withXsrfConfiguration(\n {\n cookieName: 'XSRF-TOKEN',\n headerName: 'X-XSRF-TOKEN',\n })\n ),\n\n authInterceptorProvider,\n importProvidersFrom(HttpClientModule, KeycloakBearerInterceptor),\n {\n provide: APP_INITIALIZER,\n useFactory: initializeKeycloak,\n multi: true,\n deps: [KeycloakService],\n },\n KeycloakService,\n]};\n\nHere my AppComponent\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [CommonModule, RouterOutlet],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent implements OnInit {\n title = 'testy';\n public isLoggedIn = false;\n public userProfile: KeycloakProfile | null = null;\n\n constructor(private readonly keycloak: KeycloakService,\n private http: HttpClient) { }\n\n public async ngOnInit() {\n this.isLoggedIn = await this.keycloak.isLoggedIn();\n\n if (this.isLoggedIn) {\n this.userProfile = await this.keycloak.loadUserProfile();\n }\n }\n\n login() {\n this.keycloak.login();\n }\n\n protected loadAbos() {\n this.http.get('http://localhost:8080/api/abos?email=' + this.userProfile?.email, { observe: 'response',withCredentials: true })\n .pipe(\n catchError(err => this.handleError(\"Could not load abos\", err)),\n /// if no error occurs we receive the abos\n tap(abos => {\n console.info(\"loaded abos\", abos);\n })\n ).subscribe()\n }\n\nThanks 4 your help <3"} +{"id": "000056", "text": "Working with Angular15.\nCreate a lazy loading modules\napp-routing.module.ts\n const routes: Routes = [\n { path: '', redirectTo: '/dashboard', patchMatch: 'full' },\n { path: 'dashboard', loadChildren: () => import('./dashboard/dashboard.module').then(m => m.DashboardModule) },\n { path: 'module1', loadChildren: () => import('./module1/module1.module').then(m => m.FirstModule) },\n { path: 'module2', loadChildren: () => import('./module2/module2.module').then(m => m.SecondModule) },\n { path: 'module3', loadChildren: () => import('./module3/module3.module').then(m => m.ThirdModule) },\n { path: 'module4', loadChildren: () => import('./module4/module4.module').then(m => m.FourthModule) },\n 
];\n\n@NgModule({\nimports: [RouterModule.forChild(routes, {useHash:true})]\nexports: [RouterModule]\n})\nexport class AppRoutingModule {}\n\ndashboard.module.ts -- Having components dependency with FirstModule, SecondModule, ThirdModule and FourthModule\nconst routes: Routes = [\n { path: 'dashboard', component: DashboardComponent},\n];\n\n@NgModule({\ndeclarations: [DashboardComponent, DashboardDetailsComponent]\nimports: [RouterModule.forChild(routes), FirstModule, SecondModule, ThirdModule, FourthModule]\nexports: [RouterModule],\nbootstrap: [DashboardComponent]\n})\nexport class DashboardModule {}\n\nmodule1.module.ts -- Having components dependency with SecondModule and ThirdModule\nconst routes: Routes = [\n { path: 'module1', component: Module1Component},\n { path: 'module1/details', component: Module1DetailsComponent},\n];\n\n@NgModule({\ndeclarations: [Module1Component, Module1DetailsComponent]\nimports: [RouterModule.forChild(routes), SecondModule, ThirdModule]\nexports: [RouterModule],\nbootstrap: [Module1Component]\n})\nexport class FirstModule {}\n\nmodule2.module.ts -- Having components dependency with ThirdModule\nconst routes: Routes = [\n { path: 'module2', component: Module2Component},\n { path: 'module2/details', component: Module2DetailsComponent},\n];\n\n@NgModule({\ndeclarations: [Module2Component, Module2DetailsComponent]\nimports: [RouterModule.forChild(routes), ThirdModule]\nexports: [RouterModule],\nbootstrap: [Module2Component]\n})\nexport class SecondModule {}\n\nWith the above code getting error\nError: NG04007: The Router was provided more than once. This can happen if 'forRoot' is used outside of the root injector. Lazy load modules hould use RouterModule.forChild instead\nIts working fine only if DashboardModule is explicitly imported in AppRoutingModule"} +{"id": "000057", "text": "I have an Angular service that successfully stores the key and value but fails to retrieve it. Below is the code:\nimport { Injectable } from '@angular/core';\n\n@Injectable({\n providedIn: 'root'\n})\n\nexport class StorageService {\n constructor() {}\n\n // Store the value - (Note: This code works fine. The key is userData and it stores an array of user credentials as value )\n\n async store(storageKey: string, value: any) {\n localStorage.setItem('key', storageKey);\n localStorage.setItem('value', value);\n }\n \n // Get the value - Here I provide the userData as key but I get a null value. If put key in \"key\" then I get an error. I have verified with console.log(key) that accurate key is passed to retrieve the value.\n\n async get(key: string) {\n const ret = localStorage.getItem(key);\n return JSON.parse(ret);\n } \n}"} +{"id": "000058", "text": "I updated my projecto Angular 17.1 and now when i try ng build it creates a server.mjs file in /dist//server directory which is meant to be served for production. 
when I try to run the file with node or pm2 i get the following error\nTypeError: Nl is not a function\n at $C (file:///Users/goldenfox/Projects/front/dist/karlancer/server/server.mjs:106:5771)\n at GC (file:///Users/goldenfox/Projects/front/dist/karlancer/server/server.mjs:106:6227)\n at file:///Users/goldenfox/Projects/front/dist/karlancer/server/server.mjs:106:6318\n at ModuleJob.run (node:internal/modules/esm/module_job:218:25)\n at async ModuleLoader.import (node:internal/modules/esm/loader:329:24)\n at async loadESM (node:internal/process/esm_loader:28:7)\n at async handleMainPromise (node:internal/modules/run_main:113:12)\n\nNode.js v20.11.0\n\nAny Idea about how I can fix this ?"} +{"id": "000059", "text": "I want to display items of an array of strings from index 1.\narr = [ \"str1\", \"str2\", \"str3\", \"str4\", \"str5\" ]\n\nOutput should be:\nstr2\nstr3\nstr4\nstr5\n\nPrint all except first one, using new @for loop in angular."} +{"id": "000060", "text": "In my Statistics module I got a signal that defines the type of the charts that should appear and this signal is updated using a radio button group.\nThe signal: typeSignal = signal('OIA')\nThe radio buttons that sets the :\n
\n @for (type of types; track $index) {\n \n \n }\n
\n\nHowever, I got another computed signal that creates the charts data according to the type signal. Here's the charts signal:\n charts = computed(() => {\n const chartsArr:ChartData[] = []\n if (this.typeSignal() == \"OIA\") {\n\n chartsArr.push(this.createBarChart(\"Status of Incident\", ['Closed', 'Ongoing'], \"status\", \"Advisories\", true))\n chartsArr.push(this.createBarChart(\"Severity of Incident\", ['Severity 0', 'Severity 1', 'Severity 2', 'Severity 3', 'Severity 4'], \"impacts\", \"Advisories\", false))\n chartsArr.push(this.createDonutChart(\"Communication type\", ['Incident', 'Change'], 300))\n\n } else if (this.typeSignal() == \"Portail de l'information\") {\n\n chartsArr.push(this.createBarChart(\"Status of Incident\", ['Scheduled', 'Archived', 'Ongoing'], \"status\", \"Advisories\", true))\n chartsArr.push(this.createBarChart(\"Impact of Incident\", ['Major', 'Minor', 'Grave'], \"impacts\", \"Advisories\", false))\n chartsArr.push(this.createDonutChart(\"Communication type\", ['Incident', 'Change'], 300))\n\n } else if (this.typeSignal() == \"Bulletin Board\") {\n chartsArr.push(this.createBarChart(\"Status of Change\", ['Closed', 'Ongoing', 'Scheduled'], \"status\", \"Advisories\", true))\n chartsArr.push(this.createBarChart(\"Outage of Incident\", ['Complete Outage', 'Partial Outage', 'Info'], \"impacts\", \"Advisories\", false))\n chartsArr.push(this.createDonutChart(\"Communication type\", ['Info', 'Incident', 'Change'], 300))\n }\n console.log(chartsArr);\n return structuredClone(chartsArr)\n })\n\nand I read this charts signal in my template\n@if ([\"OIA\",\"Portail de l'information\",\"Bulletin Board\"].includes(typeSignal())) {\n
\n @for (chart of charts(); track $index) {\n @if (chart.type == \"bar\") {\n \n }@else if (chart.type==\"donut\") {\n \n }\n }\n\n
\n}\n\nThe problem here is that the charts signal doesn't update the for loop although the console.log(chartsArr); inside it gets logged whenever I toggle the radio buttons."} +{"id": "000061", "text": "I'm trying to create a dynamic form array in Angular 17 with a child component handling part of the input. However, I'm encountering an error:\n\nType 'AbstractControl' is missing the following properties from type 'FormGroup': controls, registerControl, addControl, removeControl, and 2 more.\n\nHere's my code:\nParent component:\n@Component({\n selector: 'app',\n template: `\n
\n
\n
\n\n \n \n \n\n \n \n \n
\n
\n \n \n
\n\n{{this.categoryForm.value | json}}\n `,\n changeDetection: ChangeDetectionStrategy.OnPush,\n standalone: true,\n imports: [ReactiveFormsModule, JsonPipe, NgFor, CategoryComponent],\n})\nexport class AppComponent {\n categoryForm!: FormGroup;\n\n constructor(private fb: FormBuilder) {}\n\n ngOnInit(): void {\n this.categoryForm = this.fb.group({\n categories: this.fb.array([]),\n });\n }\n\n get categories(): FormArray {\n return this.categoryForm.get('categories') as FormArray;\n }\n\n addCategory() {\n this.categories.push(\n this.fb.group({\n categoryID: '',\n categoryName: '',\n sections: this.fb.array([]),\n })\n );\n }\n\n removeCategory(catIndex: number) {\n this.categories.removeAt(catIndex);\n }\n\n onSubmit() {\n console.log(this.categoryForm.value);\n }\n}\n\nChild component:\n@Component({\n selector: 'app-category',\n template: `\n
\n
\n

Category : {{index+1}}

\n Category ID :\n \n Category Name:\n \n\n \n
\n
\n `,\n changeDetection: ChangeDetectionStrategy.OnPush,\n standalone: true,\n imports: [ReactiveFormsModule, NgFor],\n})\nexport class CategoryComponent {\n @Input() categories!: FormArray;\n @Input() formGroup!: FormGroup;\n @Input() index!: number;\n\n removeCategory() {\n this.categories.removeAt(this.index);\n }\n}\n\nI've created a form array in the parent component and tried to loop through it using *ngFor, passing each form group to the child component. The child component receives the form group via @Input() and handles part of the input fields.\nCan someone help me understand why this error is occurring and how to resolve it? Thank you!"} +{"id": "000062", "text": "I migrated to angular 17.1. When I ran the app, I noticed error information about using fetch with httpClient for SSR.\nNG02801: Angular detected that `HttpClient` is not configured to use `fetch` APIs. It's strongly recommended to enable `fetch` for applications that use Server-Side Rendering for better performance and compatibility. To enable `fetch`, add the `withFetch()` to the `provideHttpClient()` call at the root of the application.\n\nApplication is not bootstrap by standalone component but I found only tutorials for that kind of apps. I put provideHttpClient(withFetch()), to app.server.module.ts and error disappeared.\nIm not sure if it is a correct solution for application bootstraped by AppModule. Do you have better options?"} +{"id": "000063", "text": "I have this new angular signal variables on my component:\nprivate employees: Signal = this.employeesService.filteredEmployeesSignal;\npublic employeesDataSource = computed(\n () => new MatTableDataSource(this.employees())\n);\n\nAll works fine until tests fails with this error:\nthis.employees is not a function\nwhen tries to access to this.employees() signal value. It detects as a function instead as a signal value getter.\nI tried to get the value in other local constant inside the computed signal but the error persist."} +{"id": "000064", "text": "As you know in angular 17 we can have different syntax for ngIf, ngFor and so on. Am looking for an efficient way of migrating old syntax in html files to the new on presented in Angular 17:\nFor example I had this old html in angular 15:\n\n
\n
\n
\n
\n\n\n
\n\n\nAnd need it in the new syntax like this:\n@if (!dynamicWidth) {\n
\n
\n
\n} @else { \n @for (item of count | numberRange; track item; let i = $index) {\n
.
\n } \n}"} +{"id": "000065", "text": "How can I define the index variable in @for in Angular 17\nconst users = [\n { id: 1, name: 'Ali' },\n { id: 2, name: 'reza' },\n { id: 3, name: 'jack' },\n ];\n\n
    \n @for (user of users; track user.id; let i = index) {\n
  • {{ user.name + i }}
  • \n } @empty {\n Empty list of users\n }\n
\n\nindex is not recognized the way it was in *ngFor, and I get Unknown \"let\" parameter variable \"index\" in Angular17 @for\nBut the following is working:\n
    \n
  • {{ user.name + i }}
  • \n
"} +{"id": "000066", "text": "I am utilizing Swiper in several of my components, and I've encountered an issue when Angular routing changes especially routeParams eg. /route/:id \u2013 it doesn't function correctly. To address this, I implemented ngZone. This resolved the main Swiper functionality, but the thumbs Swiper is still behaving unexpectedly. I believe that rather than initializing twice, it would be more efficient to initialize once with both the main Swiper and thumb Swiper details. However, I am unsure about the syntax. Can someone please assist me with this?\nNote: The objective is to ensure that Swiper works seamlessly when navigating between different routes.\nhtml:\n
\n \n \n \n \n \n\n \n \n \n \n \n
\n\ncss:\nswiper-slide {\n text-align: center;\n font-size: 18px;\n background: #fff;\n display: flex;\n justify-content: center;\n align-items: center;\n background-size: cover;\n background-position: center;\n img {\n display: block;\n width: 100%;\n height: 100%;\n object-fit: cover;\n }\n}\n\n.mySwiper {\n height: 600px;\n width: 100%;\n}\n\n.mySwiper2 {\n // defines height and width of thumbs swiper\n height: 100px;\n width: 100%;\n box-sizing: border-box;\n padding: 10px 0;\n swiper-slide {\n opacity: 0.6; // this set default opacity to all slides\n }\n .swiper-slide-thumb-active {\n opacity: 1; // this reset the opacity one for the active slide\n }\n}\n\nts\nimport { CommonModule } from '@angular/common';\nimport {\n Component,\n OnInit,\n CUSTOM_ELEMENTS_SCHEMA,\n ViewChild,\n ElementRef,\n AfterViewInit,\n NgZone,\n} from '@angular/core';\nimport { ActivatedRoute } from '@angular/router';\n\n\n@Component({\n selector: 'app-swiper-thumbs-vertical-dup-1',\n standalone: true,\n imports: [CommonModule],\n templateUrl: './swiper-thumbs-vertical-dup-1.component.html',\n styleUrl: './swiper-thumbs-vertical-dup-1.component.scss',\n schemas: [CUSTOM_ELEMENTS_SCHEMA]\n})\nexport class SwiperThumbsVerticalDup1Component implements OnInit, AfterViewInit {\n @ViewChild('swiper1') swiper1!: ElementRef;\n @ViewChild('swiper2') swiper2!: ElementRef;\n\n slides: any[] = []\n\n constructor(private zone: NgZone, private route: ActivatedRoute) { }\n\n ngOnInit() {\n this.routeSub = this.route.params.subscribe(params => {\n const id = params['id'];\n if (id == 1) {\n this.slides = [\n { image: 'https://swiperjs.com/demos/images/nature-1.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-2.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-3.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-4.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-5.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-6.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-7.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-8.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-10.jpg' },\n ];\n } else {\n this.slides = [\n { image: 'https://swiperjs.com/demos/images/nature-6.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-7.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-8.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-10.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-1.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-2.jpg' },\n { image: 'https://swiperjs.com/demos/images/nature-3.jpg' },\n ];\n }\n });\n }\n\n ngAfterViewInit() {\n this.zone.runOutsideAngular(() => {\n const swiperParams = {\n breakpoints: {\n 100: {\n slidesPerView: 3,\n },\n 640: {\n slidesPerView: 5,\n },\n 1024: {\n slidesPerView: 6,\n },\n },\n };\n\n const swiperParams1 = {\n spaceBetween: 10\n };\n\n Object.assign(this.swiper2.nativeElement, swiperParams1);\n this.swiper1.nativeElement.initialize();\n\n // now we need to assign all parameters to Swiper element\n Object.assign(this.swiper2.nativeElement, swiperParams);\n this.swiper2.nativeElement.initialize();\n });\n }\n}"} +{"id": "000067", "text": "I am trying to implement a image-slider.\nI have a image-slider componenent and a component in which i want to display the image slider\nI'd like to automatically slide through the images.\nimage-slider-component.html:\n
\n
\n \n @if (indicatorsVisible) {\n
\n @for(slide of slides; track $index; ){\n
\n }\n
\n }\n \n \n \n \n\nimage-slider-component.ts\nimport { Component, Input, OnInit } from '@angular/core';\nimport { CommonModule } from '@angular/common';\n\n\n\n@Component({\n selector: 'app-image-slider',\n standalone: true,\n imports: [CommonModule],\n templateUrl: './image-slider.component.html',\n styleUrl: './image-slider.component.scss',\n})\nexport class ImageSliderComponent implements OnInit{\n\n @Input() slides: any[] = [];\n @Input() indicatorsVisible = true;\n @Input() animationSpeed = 500;\n @Input() autoPlay = true;\n @Input() autoPlaySpeed = 3000;\n currentSlide = 0;\n hidden = false;\n\n next() {\n let currentSlide = (this.currentSlide + 1) % this.slides.length;\n this.jumpToSlide(currentSlide);\n }\n\n previous() {\n let currentSlide =\n (this.currentSlide - 1 + this.slides.length) % this.slides.length;\n this.jumpToSlide(currentSlide);\n }\n\n jumpToSlide(index: number) {\n this.hidden = true;\n setTimeout(() => {\n this.currentSlide = index;\n this.hidden = false;\n }, this.animationSpeed);\n }\n\n ngOnInit() {\n if (this.autoPlay) {\n setInterval(() => {\n this.next();\n }, this.autoPlaySpeed);\n }\n }\n\n}\n\n\nmore.component.html (i'd like to display the img-slider here)\n\n \n\n \n\n\nmore.component.ts (here are the imgs that are used by the img-slider)\nimport { CommonModule } from '@angular/common';\nimport { Component } from '@angular/core';\nimport { FormsModule, ReactiveFormsModule } from '@angular/forms';\nimport { RouterOutlet } from '@angular/router';\nimport { ImageSliderComponent } from '../about/components/image-slider/image-slider.component';\n\n@Component({\n selector: 'app-more',\n standalone: true,\n imports: [ CommonModule,\n RouterOutlet,\n FormsModule,\n ReactiveFormsModule,\n ImageSliderComponent,],\n templateUrl: './more.component.html',\n styleUrl: './more.component.scss'\n})\nexport class MoreComponent {\n \n slides: any[] = [\n {\n url: 'https://html.com/wp-content/uploads/flamingo.webp',\n title: 'First Slide',\n description: 'test1',\n },\n {\n url: 'https://images.unsplash.com/photo-1542831371-29b0f74f9713?w=500&auto=format&fit=crop&q=60&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxzZWFyY2h8Mnx8aHRtbHxlbnwwfHwwfHx8MA%3D%3D',\n title: 'Second Slide',\n description: 'test2',\n }\n ];\n\n \n}\n\n\nThe page doesnt load:\nenter image description here\nIf i remove the OnInit function aswell as the OnInit interface, the page is loading and i can switch through the images by the buttons i implemented, but the autoplay isn't working since I am not using the OnInit Function"} +{"id": "000068", "text": "I've been trying to add \"toastr\" to my Angular17 project but injecting it into my components does not work. 
I added it using AngularCLI.\nI'm getting the next error:\nERROR Error [NullInjectorError]: R3InjectorError(Standalone[_PDSLoginComponent])[InjectionToken ToastConfig -> InjectionToken ToastConfig -> InjectionToken ToastCo\nnfig -> InjectionToken ToastConfig]:\nNullInjectorError: No provider for InjectionToken ToastConfig!\nHere's what my code contains:\nimport { Component, Output, EventEmitter, Inject } from '@angular/core';\nimport { Router } from '@angular/router';\nimport { FormsModule } from '@angular/forms';\nimport { ToastrService, ToastNoAnimation } from 'ngx-toastr';\n\n@Component({\n selector: 'app-pdslogin',\n standalone: true,\n imports: [\n FormsModule\n ],\n providers: [\n { provide: ToastrService, useClass: ToastrService },\n { provide: ToastNoAnimation, useClass: ToastNoAnimation }\n ],\n templateUrl: './pdslogin.component.html',\n styleUrls: ['./pdslogin.component.css']\n})\nexport class PDSLoginComponent {\n loginData = {\n UserId: ''\n };\n @Output() loginEvent = new EventEmitter();\n onLogin() {\n this.loginEvent.emit(this.loginData);\n this.toastr.info(\"SHOWING TOASTR!!!\",\"Info\");\n }\n constructor(private router: Router, private toastr: ToastrService) { }\n}\n\n\nI have tried searching for a solution in forums and questions here in Stack Overflow but all of them are for previous versions of Angular, now Angular17 uses the property \"standalone\", so then it requires to import and inject right to the component.\n\nSomething I tried is adding it as provider in my 'main.ts' file but as well, didn't work."} +{"id": "000069", "text": "The goal is to create a web components using angular and use it in an external html file.\nI created the web component : 'my-web-component'\n(async () => {\n const app: ApplicationRef = await createApplication(appConfig);\n\n // Define Web Components\n const MyComponent = createCustomElement(MyComponentComponent, { injector: app.injector });\n customElements.define('my-web-component', MyComponent);\n})();\n\nI served(ng serve) and the Custom element works!\nSo, I build my project getting 3 files : main.js, polyfills.js and index.html (/dist).\nI tested the dist/index.html with Live Server and the Custom element still works.\n\n\n\n \n WebComponentClean\n \n\n\n \n \n \n \n\n\n\n\nNow there is the problem\nI created a second index2.html in this project, but out of src folder, and I tested my web component:\n\n\n\n \n \n Document\n \n\n\n \n \n \n \n\n"} +{"id": "000070", "text": "I have an Angular 17 application that uses server-side rendering. The state of the application is managed using ngrx.\nWhen I access the page, I can see that the page comes pre-rendered (by viewing the source of the page, for example), but once the page loads, Angular seems to start from scratch.\nFor example, I have some labels that are obtained through an HTTP request. Initially I see the labels in the page, but then Angular starts and clears the state, rendering the labels blank. It then performs the HTTP request to fetch those labels and displays them again. The end-result is correct but there is a flicker when Angular takes over. From what I read it should be reusing the state from the server.\nOn the browser console, I can see the following:\nAngular hydrated 12 component(s) and 98 node(s), 0 component(s) were skipped. Learn more at https://angular.io/guide/hydration.\n\nThis seems to indicate the Angular was able to hydrate all my components. 
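As a general aside on the hydration question above: server-fetched data can be handed to the browser with TransferState so it is not refetched (and does not flicker) after hydration. A minimal sketch, with a made-up service name, state key and /api/labels URL; for NgRx specifically, the transferred snapshot would still have to be merged into the store (for example via a meta-reducer) rather than used directly.

import { Injectable, PLATFORM_ID, TransferState, inject, makeStateKey } from '@angular/core';
import { isPlatformServer } from '@angular/common';
import { HttpClient } from '@angular/common/http';
import { Observable, of, tap } from 'rxjs';

// Placeholder key and endpoint, for illustration only.
const LABELS_KEY = makeStateKey<Record<string, string>>('labels');

@Injectable({ providedIn: 'root' })
export class LabelsService {
  private http = inject(HttpClient);
  private transferState = inject(TransferState);
  private platformId = inject(PLATFORM_ID);

  getLabels(): Observable<Record<string, string>> {
    // Browser after hydration: reuse what the server render already fetched.
    if (this.transferState.hasKey(LABELS_KEY)) {
      return of(this.transferState.get(LABELS_KEY, {}));
    }
    return this.http.get<Record<string, string>>('/api/labels').pipe(
      tap(labels => {
        // Server render: stash the response so it is serialized into the page.
        if (isPlatformServer(this.platformId)) {
          this.transferState.set(LABELS_KEY, labels);
        }
      })
    );
  }
}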
I have the Redux dev tools, and by checking them it seems that it is the store that is not being hydrated, as the initial state corresponds to the default state of the store while I would expect it to start from the state that came from the server.\nWhat do I need to do to preserve the state of the store?"} +{"id": "000071", "text": "I've tried to implement a reactive form with formArray in angular17. I encountered this issue when you removed one item from the formArray from the top or from the middle.\nhere's the stackbiz for the issue reproduction.\nreproduction path:\n\nput some values in the film fields.\nclick the add film button to add another film to the form.\nput some other values in the newly added film fields.\nclick on the remove film button under the first film fields.\nobserve the values in the film fields, the removed film field values are still showing on the dom. but in the below json where I display the form values are updated correctly.\n\nI try to use ApplicationRef.tick(), ChangeDetectorRef.detectChanges(), and updateValueAndValidity() as other similar issues suggested, but no luck. try to use trackBy, but it doesn't do the trick also. Can someone tell me what I'm doing wrong here?\nimport { CommonModule } from '@angular/common';\nimport { ApplicationRef, ChangeDetectorRef, Component } from '@angular/core';\nimport {\n FormArray,\n FormBuilder,\n FormControl,\n FormGroup,\n ReactiveFormsModule,\n} from '@angular/forms';\n\n@Component({\n selector: 'app-add-vehicle',\n standalone: true,\n imports: [CommonModule, ReactiveFormsModule],\n templateUrl: './add-vehicle.component.html',\n styleUrl: './add-vehicle.component.scss',\n})\nexport class AddVehicleComponent {\n constructor(\n private fb: FormBuilder,\n private appRef: ApplicationRef,\n private cdr: ChangeDetectorRef\n ) {}\n\n addVehicleForm = this.fb.group({\n make: [''],\n model: [''],\n year: [''],\n films: this.fb.array([this.createFilmFormGroup()]),\n });\n\n createFilmFormGroup(): FormGroup {\n return this.fb.group({\n title: [''],\n releaseDate: [''],\n url: [''],\n });\n }\n\n get films(): FormArray {\n return this.addVehicleForm.get('films') as FormArray;\n }\n\n addFilm() {\n this.films.push(this.createFilmFormGroup());\n }\n\n removeFilm(index: number) {\n this.films.removeAt(index);\n\n this.addVehicleForm.reset(this.addVehicleForm.value);\n\n // not working\n // this.addVehicleForm.updateValueAndValidity();\n\n // not working\n // this.appRef.tick();\n\n //not woorking\n // this.cdr.detectChanges();\n }\n}\n\n
\n

Add Vehicle

\n
\n
\n \n \n
\n
\n \n \n
\n
\n \n \n
\n\n
\n
\n @for (film of films.controls; track $index) {\n \n \n
\n \n \n
\n
\n \n \n
\n
\n \n \n
\n \n
\n }\n
\n \n
\n \n\n
\n  {{ addVehicleForm.value | json }}\n\n"}
+{"id": "000072", "text": "Icons don't work in Angular.\nIn the component.ts file:\nimport { MatIconModule } from '@angular/material/icon';\n\n@Component({\n  selector: 'app-dashboard-space',\n  standalone: true,\n  imports: [\n    MatIconModule,\n  ],\n\nAnd in the HTML file:\n
\n grade\n
\n\nand what I see is the text: \"gra\"\nOf course, I ran npm install --save @angular/material.\nBut the console is empty.\nDo you have any idea?"} +{"id": "000073", "text": "I'm trying to add a table in my Angular app from Angular Material@17.2.1 (as shown here).\nI copy / pasted the code from the Angular Material official documentation but I still get the error:\n\nCan't bind to 'dataSource' since it isn't a known property of 'table'\n\nI tried the following:\nReplacing table tag with mat-table, resulting in\n\nmat-table is not a known element\n\nDeleting the brackets in [dataSource]=\"dataSource\" as dataSource=\"dataSource\", resulting in (Webdeveloper console error this time, this could be the solution but I don't know how to fix it even after some searches)\n\nCan't bind to 'matHeaderRowDef' since it isn't a known property of 'tr' (used in the '_UsersComponent' component template).\n\nimporting MatTableModule from '@angular/material' instead of '@angular/material/table' in app.module.ts, resulting in\n\n@angular/material has no exp\u00f4rted member MatTableModule\n\nAdd import {CdkTableModule} from '@angular/cdk/table'; and importing it, changing nothing.\nThe MatTableModule is declared and imported in app.module.ts, where the UsersComponent using it is also declared and imported.\nHere are all versions used in the project:\n@angular-devkit/architect 0.1702.2\n@angular-devkit/build-angular 17.2.2\n@angular-devkit/core 17.2.2\n@angular-devkit/schematics 17.2.2\n@angular/cdk 17.2.1\n@angular/cli 17.2.2\n@angular/material 17.2.1\n@schematics/angular 17.2.2\nrxjs 7.8.1\ntypescript 5.3.3\nzone.js 0.14.4\nHere is my app.module.ts file's content:\nimport { NgModule } from '@angular/core';\nimport { AppRoutingModule } from './app-routing.module';\nimport { BrowserModule } from '@angular/platform-browser';\n\nimport { MatListModule } from '@angular/material/list';\nimport { MatToolbarModule } from '@angular/material/toolbar';\nimport { MatSidenavModule } from '@angular/material/sidenav';\n\nimport { AppComponent } from './app.component';\nimport { TopbarComponent } from './topbar/topbar.component';\nimport { LeftmenuComponent } from './leftmenu/leftmenu.component';\nimport { NavleftbarComponent } from './navleftbar/navleftbar.component';\n\nimport { MatIconModule } from '@angular/material/icon';\nimport {MatTableModule} from '@angular/material/table';\nimport { MatButtonModule } from '@angular/material/button';\nimport {CdkTableModule} from '@angular/cdk/table';\n\nimport { provideAnimationsAsync } from '@angular/platform-browser/animations/async';\nimport { UsersComponent } from './users/users.component';\n\n@NgModule({\n declarations: [\n AppComponent,\n TopbarComponent,\n LeftmenuComponent,\n NavleftbarComponent\n ],\n imports: [\n BrowserModule,\n AppRoutingModule,\n MatToolbarModule,\n MatSidenavModule,\n MatListModule,\n MatButtonModule,\n MatIconModule,\n MatTableModule,\n UsersComponent,\n CdkTableModule\n ],\n providers: [\n provideAnimationsAsync()\n ],\n bootstrap: [AppComponent]\n})\nexport class AppModule { }\n\nHere's the users.component.ts's file content:\nimport { Component } from '@angular/core';\nimport { MatTableModule } from '@angular/material/table';\n\nexport interface UserElement {\n firstname: string;\n lastName: string;\n email: string;\n microsoftEmail: string;\n class: string;\n}\n\nconst ELEMENT_DATA: UserElement[] = [{\n firstname: \"Pierrick\",\n lastName: \"MARTELLIERE\",\n email: \"p.martelliere@gmail.com\",\n microsoftEmail: \"pierrick@coxidev.com\",\n class: 
\"GPME24\"\n}];\n\n@Component({\n selector: 'app-users',\n templateUrl: './users.component.html',\n styleUrl: './users.component.css',\n standalone: true,\n})\n\nexport class UsersComponent {\n displayedColumns: string[] = ['firstName', 'lastName', 'email', 'microsoftEmail', 'class'];\n dataSource = ELEMENT_DATA;\n clickedRows = new Set();\n}\n\nAnd the users.component.html:\n\n \n \n \n \n\n \n \n \n \n\n \n \n \n \n\n \n \n \n \n\n \n \n \n \n\n \n \n
Pr\u00e9nom {{element.position}} Name {{element.name}} Email {{element.weight}} Compte Microsoft {{element.symbol}} Classe {{element.symbol}}
\n\n\n\n\n\nI really don't see what I am missing, some help would be welcome."} +{"id": "000074", "text": "My application runs fine. My application is a standalone component app. When I try to test the component, I get the error mentioned in the title in Karma.\nImports of my component\nimport { Component } from '@angular/core';\nimport {MatSelectModule} from '@angular/material/select';\nimport {MatFormFieldModule} from '@angular/material/form-field';\nimport { NavigationComponent } from '../../../0_navigation/navigation.component';\nimport jsonDataShipments from '../../../../../assets/shipments.json';\n\n@Component({\n selector: 'app-sort-functionality',\n standalone: true,\n imports: [NavigationComponent, MatFormFieldModule, MatSelectModule],\n templateUrl: './sort-functionality.component.html',\n styleUrl: './sort-functionality.component.css'\n})\n\nConfig of my app\nimport { ApplicationConfig } from '@angular/core';\nimport { provideRouter } from '@angular/router';\n\nimport { routes } from './app.routes';\nimport { provideHttpClient } from '@angular/common/http';\nimport { provideAnimationsAsync } from '@angular/platform-browser/animations/async';\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideRouter(routes),\n provideHttpClient(), provideAnimationsAsync(),\n ],\n};\n\nError\nError: NG05105: Unexpected synthetic property @transitionMessages found. Please make sure that:\n - Either `BrowserAnimationsModule` or `NoopAnimationsModule` are imported in your application.\n - There is corresponding configuration for the animation named `@transitionMessages` defined in the `animations` field of the `@Component` decorator (see https://angular.io/api/core/Component#animations).\n\nIf I import BrowserAnimationsModule\nI get this error\nUncaught (in promise): Error: Providers from the `BrowserModule` have already been loaded. If you need access to common directives such as NgIf and NgFor, import the `CommonModule` instead.\nError: Providers from the `BrowserModule` have already been loaded. If you need access to common directives such as NgIf and NgFor, import the `CommonModule` instead."} +{"id": "000075", "text": "upgrading from angular 14 to 17 I encountered issue with ng test. It is not working anymore.\nError:\nPS C:\\ForAngular17\\src\\ng\\cat-ng> ng test\nOne or more browsers which are configured in the project's Browserslist configuration will be ignored as ES5 output is not supported by the Angular CLI.\nIgnored browsers: ie 11, ie 10, ie 9, kaios 2.5, op_mini all\n\u2714 Browser application bundle generation complete.\n##teamcity[blockOpened name='JavaScript Unit Tests' flowId='']\n\nError: error TS2688: Cannot find type definition file for '@angular/localize'.\n The file is in the program because:\n Entry point of type library '@angular/localize' specified in compilerOptions\n\n\n\n 12 03 2024 09:26:26.230:INFO [karma-server]: Karma v6.4.3 server started at http://localhost:9876/\n 12 03 2024 09:26:26.232:INFO [launcher]: Launching browsers ChromeHeadlessNoSandbox with concurrency unlimited\n 12 03 2024 09:26:26.233:ERROR [karma-server]: Error: Found 1 load error\n at Server. 
(C:\\ForAngular17\\src\\ng\\cat-ng\\node_modules\\karma\\lib\\server.js:243:26)\n at Object.onceWrapper (node:events:631:28)\n at Server.emit (node:events:529:35)\n at emitListeningNT (node:net:1851:10)\n at process.processTicksAndRejections (node:internal/process/task_queues:81:21)\n\ntsconfig.spec.json code:\n{\n \"extends\": \"../tsconfig.json\",\n \"compilerOptions\": {\n \"outDir\": \"../out-tsc/spec\",\n \"types\": [\n \"jasmine\",\n \"node\",\n \"@angular/localize\"\n ]\n },\n \"files\": [\n \"test.ts\",\n \"polyfills.ts\"\n ],\n \"include\": [\n \"**/*.spec.ts\",\n \"**/*.d.ts\"\n ]\n}\n\npackage.json code\n {\n \"name\": \"cat-ng\",\n \"version\": \"0.0.0\",\n \"scripts\": {\n \"ng\": \"ng\",\n \"start\": \"ng serve\",\n \"build\": \"ng build\",\n \"build-prod\": \"ng build --configuration production\",\n \"build-azure\": \"ng build --azure\",\n \"test\": \"ng test\",\n \"lint\": \"ng lint\",\n \"e2e\": \"ng e2e\"\n },\n \"private\": true,\n \"dependencies\": {\n \"@ag-grid-community/angular\": \"^28.0.0\",\n \"@ag-grid-community/client-side-row-model\": \"^28.0.2\",\n \"@ag-grid-community/core\": \"^28.0.2\",\n \"@ag-grid-enterprise/clipboard\": \"^28.0.2\",\n \"@ag-grid-enterprise/column-tool-panel\": \"^28.0.2\",\n \"@ag-grid-enterprise/excel-export\": \"^28.0.2\",\n \"@ag-grid-enterprise/menu\": \"^28.0.2\",\n \"@ag-grid-enterprise/range-selection\": \"^28.0.2\",\n \"@ag-grid-enterprise/server-side-row-model\": \"^28.0.2\",\n \"@ag-grid-enterprise/set-filter\": \"^28.0.2\",\n \"@angular/animations\": \"^17.2.4\",\n \"@angular/cdk\": \"^17.2.2\",\n \"@angular/common\": \"^17.2.4\",\n \"@angular/compiler\": \"^17.2.4\",\n \"@angular/core\": \"^17.2.4\",\n \"@angular/forms\": \"^17.2.4\",\n \"@angular/localize\": \"^17.2.4\",\n \"@angular/material\": \"^17.2.2\",\n \"@angular/material-moment-adapter\": \"^17.2.2\",\n \"@angular/platform-browser\": \"^17.2.4\",\n \"@angular/platform-browser-dynamic\": \"^17.2.4\",\n \"@angular/platform-server\": \"^17.2.4\",\n \"@angular/router\": \"^17.2.4\",\n \"@azure/msal-angular\": \"^3.0.13\",\n \"@azure/msal-browser\": \"^3.10.0\",\n \"@fortawesome/angular-fontawesome\": \"^0.7.0\",\n \"@fortawesome/fontawesome-free\": \"^5.12.1\",\n \"@fortawesome/fontawesome-svg-core\": \"^1.2.21\",\n \"@microsoft/applicationinsights-web\": \"^2.8.4\",\n \"@ng-bootstrap/ng-bootstrap\": \"^13.0.0\",\n \"@ngrx/data\": \"^17.1.1\",\n \"@ngrx/effects\": \"^17.1.1\",\n \"@ngrx/entity\": \"^17.1.1\",\n \"@ngrx/router-store\": \"^17.1.1\",\n \"@ngrx/schematics\": \"^17.1.1\",\n \"@ngrx/store\": \"^17.1.1\",\n \"@ngrx/store-devtools\": \"^17.1.1\",\n \"@ngtools/webpack\": \"^17.2.3\",\n \"@popperjs/core\": \"^2.11.5\",\n \"angular2-text-mask\": \"^9.0.0\",\n \"bignumber.js\": \"^9.1.1\",\n \"bootstrap\": \"^5.3.3\",\n \"bootstrap-scss\": \"^5.3.3\",\n \"core-js\": \"^2.6.9\",\n \"date-input-polyfill\": \"^2.14.0\",\n \"file-saver\": \"^2.0.5\",\n \"flag-icon-css\": \"^3.2.0\",\n \"font-awesome\": \"^4.7.0\",\n \"fortawesome\": \"^0.0.1-security\",\n \"jquery\": \"^3.7.1\",\n \"karma-firefox-launcher\": \"^2.1.3\",\n \"mathjs\": \"^10.6.4\",\n \"moment\": \"^2.29.3\",\n \"ng-bootstrap\": \"^1.6.3\",\n \"ng-click-outside2\": \"^14.0.1\",\n \"ng-sidebar\": \"^8.1.1\",\n \"ngx-angular-query-builder\": \"^17.0.0\",\n \"ngx-toastr\": \"^18.0.0\",\n \"popper.js\": \"^1.16.1\",\n \"rxjs\": \"^7.8.1\",\n \"rxjs-compat\": \"^6.5.3\",\n \"subsink\": \"^1.0.1\",\n \"tether\": \"^1.4.7\",\n \"url-search-params-polyfill\": \"^5.0.0\",\n \"zone.js\": \"~0.14.4\"\n },\n 
\"devDependencies\": {\n \"@angular-devkit/build-angular\": \"^17.2.3\",\n \"@angular/cli\": \"^17.2.3\",\n \"@angular/compiler-cli\": \"^17.2.4\",\n \"@angular/language-service\": \"^17.2.4\",\n \"@types/jasmine\": \"~3.6.0\",\n \"@types/jasminewd2\": \"~2.0.4\",\n \"@types/jquery\": \"^3.5.14\",\n \"@types/node\": \"^10.14.14\",\n \"codelyzer\": \"^6.0.0\",\n \"jasmine-core\": \"^5.1.2\",\n \"jasmine-spec-reporter\": \"~7.0.0\",\n \"karma\": \"^6.4.3\",\n \"karma-chrome-launcher\": \"^3.2.0\",\n \"karma-coverage-istanbul-reporter\": \"^3.0.3\",\n \"karma-jasmine\": \"^5.1.0\",\n \"karma-jasmine-html-reporter\": \"^2.0.0\",\n \"karma-junit-reporter\": \"^2.0.1\",\n \"karma-teamcity-reporter\": \"^1.1.0\",\n \"protractor\": \"^7.0.0\",\n \"puppeteer\": \"^1.19.0\",\n \"ts-node\": \"^7.0.1\",\n \"tslint\": \"^6.1.3\",\n \"typescript\": \"5.3.3\",\n \"webpack-bundle-analyzer\": \"^3.4.1\"\n }\n}\n\nWhat is the possible here? Thank you!"} +{"id": "000076", "text": "I have a component (see: it is a standalone one):\n@Component({\n standalone: true, // <--- See here \n selector: \"app-login\",\n imports: [FormsModule, CommonModule],\n templateUrl: \"./login.component.html\",\n styleUrl: \"./login.component.css\"\n})\nexport class LoginComponent {\n constructor(private authService: AuthService) {}\n}\n\nAnd the service is (see, it requires HttpClient to be injected):\nimport { HttpClient } from '@angular/common/http';\n\n@Injectable({\n providedIn: 'root',\n})\nexport default class AuthService {\n constructor(private http: HttpClient) {} // <--- See here: if I remove this httpClient, it works.\n}\n\nIt does not works:\nERROR NullInjectorError: R3InjectorError(Standalone[e])[e -> e -> e -> e]: \n NullInjectorError: No provider for e!\n\nIf I remove the httpClient from the constructor of the service, it works (but it does nothing). It seem to me that the injection of the HttpClient inside the service is not working.\nAny clue?\nVersion: Angular 17\nPS: lot's of details removed :-)"} +{"id": "000077", "text": "Clicking on the second button in a @for-list changes the state to true of the 2nd and 3rd buttons when using it as a component. When using the button from the component directly, it works correctly: the state of the 2nd button is true, while the state of the 3rd button is still false. Why is that?\nStackblitz\napp.component.ts:\n@Component({\n selector: 'app-root',\n standalone: true,\n template: `\n Clicking on second button changes state of 2nd and 3rd button when using as component

\n When using the button component directly, it works correctly

\n undone:\n @for (todo of undoneTodos; track $index) {\n \n \n }\n\n done:\n @for (todo of doneTodos; track $index) {\n \n }\n `,\n imports: [SliderComponent]\n})\nexport class App {\n undoneTodos: Todo[] = [];\n doneTodos: Todo[] = [];\n\n private todos: Todo[] = [\n {id: 1, done: false },\n {id: 2, done: false },\n {id: 3, done: false },\n ]\n\n constructor() {\n this.buildTodos();\n }\n\n toggle(todo: Todo) {\n todo.done = !todo.done\n\n this.buildTodos();\n }\n\n private buildTodos() {\n this.doneTodos = this.todos.filter(x => x.done);\n this.undoneTodos = this.todos.filter(x => !x.done);\n }\n}\n\nexport interface Todo {\n id?: number;\n done: boolean;\n}\n\nslider.component.ts:\nimport { Component, EventEmitter, Input, Output } from \"@angular/core\";\n\n@Component({\n selector: 'app-slider',\n standalone: true,\n template: '',\n})\nexport class SliderComponent {\n @Input() active: boolean = false;\n @Output() activeChange: EventEmitter = new EventEmitter();\n \n onClick(event: Event) {\n event.stopImmediatePropagation();\n \n this.active = !this.active;\n this.activeChange.emit(this.active);\n }\n}"} +{"id": "000078", "text": "I have a timer for 10 minutes and I need to show it in my angular app that has many routing links (pages). I know that angular is one page site but you can browse to different areas by using routing. Since this is true, how to show the timer in the main app component for all the routing links without resetting the timer everytime i go to difffent routing link?\ntimer code:\ntypescript file:\ntimer(minute) {\n // let minute = 1;\n let seconds: number = minute * 60;\n let textSec: any = \"0\";\n let statSec: number = 60;\n\nconst prefix = minute < 10 ? \"0\" : \"\";\n\nconst timer = setInterval(() => {\n seconds--;\n if (statSec != 0) statSec--;\n else statSec = 59;\n\n if (statSec < 10) {\n textSec = \"0\" + statSec;\n } else textSec = statSec;\n\n this.display = `${prefix}${Math.floor(seconds / 60)}:${textSec}`;\n\n if (seconds == 0) {\n console.log(\"finished\");\n clearInterval(timer);\n }\n}, 1000);\n\n\n}\n\nhtml file:\n{{display}}\n\nif I put this in main app component and tried to go to different routing link it will reset the timer again and I don't want this to happened. Let us say main page is properties and second one is contacts, if i switch between then the timer resets again and again, I don't want that."} +{"id": "000079", "text": "I am trying to add a bootstrap toaster to my angular project. 
I am using angular 17 with standalone components.\nI am trying to follow what was done in this github repository but my project is standalone.\nRunning ng serve is successful but when I am trying to access http://localhost:4200/ I get the following error\n[vite] Internal server error: document is not defined\n at enableDismissTrigger (D:\\dev\\git-2.0\\base-templates\\angular\\angular-bootstrap-5-base\\node_modules\\bootstrap\\dist\\js\\bootstrap.esm.js:802:19)\n at eval (D:\\dev\\git-2.0\\base-templates\\angular\\angular-bootstrap-5-base\\node_modules\\bootstrap\\dist\\js\\bootstrap.esm.js:884:1)\n at async instantiateModule (file:///D:/dev/git-2.0/base-templates/angular/angular-bootstrap-5-base/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54758:9) (x6)\n\nhttps://stackblitz.com/~/github.com/Chrispie/code-problems on my issue-creating-bootstrap-5-toaster-with-angular-17-standalone branch:\n\nI added my sample project where I have reproduced the error to my github repo.\nAny idea as to what am I doing wrong here or what does it mean in this context?\nFor completeness sake I am outlining below what I have done.\nI created a new angular project with scss and added the following 2 dependencies\nnpm i bootstrap\nnpm i @types/bootstrap --save-dev\n\nA toast and a toaster component is added with a service and 2 models. The main app component has a few buttons to launch the toasts.\nWhen running I am expecting it to look something like this"} +{"id": "000080", "text": "I am trying to connect to an API through a service that I have created with Angular but when I enter the page, console browser returns the following error:\nmain.ts:5 ERROR NullInjectorError: R3InjectorError(Standalone[_HomeComponent])[_EventosService -> _EventosService -> _EventosService -> _HttpClient -> _HttpClient]: \n NullInjectorError: No provider for _HttpClient!\n at NullInjector.get (core.mjs:1654:27)\n at R3Injector.get (core.mjs:3093:33)\n at R3Injector.get (core.mjs:3093:33)\n at injectInjectorOnly (core.mjs:1100:40)\n at Module.\u0275\u0275inject (core.mjs:1106:42)\n at Object.EventosService_Factory [as factory] (eventos.service.ts:8:28)\n at core.mjs:3219:47\n at runInInjectorProfilerContext (core.mjs:866:9)\n at R3Injector.hydrate (core.mjs:3218:21)\n at R3Injector.get (core.mjs:3082:33)\n\nI am using Angular version 17 in my project and doing the imports in the same HomeComponent component. 
The affected files are the following.\nhome.component.ts\nimport { Component, OnInit } from '@angular/core';\nimport { EventosService } from '../../shared/services/eventos/eventos.service';\nimport { Observable } from 'rxjs';\nimport { CommonModule } from '@angular/common';\nimport { HttpClientModule } from '@angular/common/http';\n\n@Component({\n selector: 'app-home',\n standalone: true,\n templateUrl: './home.component.html',\n imports: [CommonModule, HttpClientModule],\n styleUrls: ['./home.component.sass']\n})\n\nexport class HomeComponent implements OnInit{\n\n constructor( private es: EventosService ) {}\n\n usuarios : any = {};\n \n ngOnInit(): void {\n this.es.getEventos().subscribe(data => {\n this.usuarios = data;\n });\n console.log(this.usuarios);\n }\n\n}\n\neventos.services.ts\nimport { Injectable } from '@angular/core';\nimport { HttpClient } from '@angular/common/http';\nimport { Observable } from 'rxjs';\n\n@Injectable(\n {providedIn: 'root'}\n)\nexport class EventosService {\n\n private baseUrl = \"https://localhost:7019/api\";\n\n constructor( private http: HttpClient ) { }\n\n public getEventos(): Observable {\n return this.http.get(`${this.baseUrl}/Usuarios`);\n }\n\n}\n\nWARNING: I am aware that the service is called \"Eventos\" and I am calling \"Usuarios\" in the API, it is to test that it returns me some result even if it is by console.\nINFO\nI am using Angular 17 with the default settings although the project is migrated from Angular 16. The API is made with .NET but with other tools I can connect well to it and launch requests that return results."} +{"id": "000081", "text": "I am using the command ng new to create a new angular project. When the project is created, though it runs properly, there is no angular module present in the project.\nThe ng new command is generating a project only with StandAlone components.\nI am using angular 17"} +{"id": "000082", "text": "I have a canDeactivateGuard which returns from component's MatDialog for unsaved action.\nMy problem is I am unable to test the functional guard and getting error -\nTypeError: Cannot read properties of undefined (reading 'canUserExitAlertDialog')\nHere is the Guard-\nimport { CanDeactivateFn } from '@angular/router';\nimport { Observable, map } from 'rxjs';\n\nexport const canDeactivateGuard: CanDeactivateFn = (\n component,\n route,\n state\n): Observable | boolean => {\n return component.canUserExitAlertDialog('back').pipe(\n map((result) => {\n console.log(result);\n return result === 'discard';\n })\n );\n};\n\nHere is my component's method which the guard is calling-\nNote: I have a mat-dialog which have two buttons - 'discard' and 'cancel'. 
On 'discard' click user is redirected to home page.\ncanUserExitAlertDialog(key: string): Observable { \n//I have a condition in the alert component based on this key\n if (this.hasFormSaved) { //if any changes are saved then not considered\n return of('discard');\n }\n const dialogConfig = new MatDialogConfig();\n dialogConfig.disableClose = true;\n dialogConfig.autoFocus = true;\n dialogConfig.data = { action: key, redirect: 'home' };\n if (this.wasFormChanged || this.form.dirty) {\n const dialogRef = this.dialog.open(AlertComponent, dialogConfig);\n return dialogRef.afterClosed();\n } else {\n this.dialog.closeAll();\n return of('discard');\n }\n }\n\nCode in Dialog Alert component:\nexport class AlertComponent {\n userAction!: any;\n alertMsg = 'Are you sure you want to discard the changes?';\n unsavedChanges = 'This will reload the page';\n constructor(\n @Inject(MAT_DIALOG_DATA) data: any,\n @Inject(Window) private window: Window,\n private dialogRef: MatDialogRef\n ) {\n this.userAction = data;\n }\n\n public onCancel(): void {\n this.dialogRef.close('cancel');\n }\n\n public onDiscard(): void {\n this.dialogRef.close('discard');\n if (this.userAction.action === 'something') { //key I passed from the main component\n console.log('do something');\n }\n }\n}\n\nFinally here is my code in CanDeactivate spec file-\ndescribe('canDeactivateGuard functional guard', () => {\n let nextState: RouterStateSnapshot;\n let component: MyComponent;\n beforeEach(() => {\n TestBed.configureTestingModule({\n providers: [\n {\n provide: ActivatedRoute,\n useValue: {\n snapshot: {},\n },\n },\n ],\n });\n });\n\n it('should be created', fakeAsync(() => {\n const activatedRoute = TestBed.inject(ActivatedRoute);\n const nextState = {} as RouterStateSnapshot;\n const currState = {} as RouterStateSnapshot;\n const guardResponse = TestBed.runInInjectionContext(() => {\n canDeactivateGuard(\n component,\n activatedRoute.snapshot,\n currState,\n nextState\n ) as Observable;\n });\n expect(guardResponse).toBeTruthy();\n }));\n\nI have tried to create a stub component and define the canUserExitAlertDialog method but didn't help.\nIs there another way to do this test successfully? AS per angular, class level deactivate guard is deprecated.\nError here-\nerror message\nTest Coverage-\nenter image description here"} +{"id": "000083", "text": "I have Angular 17 Project. I want to display some charts in it. So I have installed following modules in my project\n\nnpm i ag-charts-community\nnpm install ag-charts-angular\n\nI am trying to use existing example from AG Charts as written below:-\nTS File\nimport { Component, OnInit, OnDestroy} from '@angular/core';\nimport { AgChartsAngular } from \"ag-charts-angular\";\nimport { AgChartOptions } from \"ag-charts-community\";\n\nimport { CommonModule } from '@angular/common';\n\nimport { getData } from \"./data\";\n\n@Component({\n selector: 'view-survey',\n // templateUrl: './viewsurvey.component.html',\n template: ' ',\n standalone: true,\n imports: [\n CommonModule, \n AgChartsAngular, \n ], \n providers: [], \n})\n\nexport class ViewSurveyComponent implements OnInit, OnDestroy {\n public options:AgChartOptions;\n\n constructor() { \n this.options = {\n title: {\n text: \"Apple's Revenue by Product Category\",\n },\n subtitle: {\n text: \"In Billion U.S. 
Dollars\",\n },\n data: this.getData(),\n series: [\n {\n type: \"bar\",\n direction: \"horizontal\",\n xKey: \"quarter\",\n yKey: \"iphone\",\n yName: \"iPhone\",\n },\n {\n type: \"bar\",\n direction: \"horizontal\",\n xKey: \"quarter\",\n yKey: \"mac\",\n yName: \"Mac\",\n },\n {\n type: \"bar\",\n direction: \"horizontal\",\n xKey: \"quarter\",\n yKey: \"ipad\",\n yName: \"iPad\",\n },\n {\n type: \"bar\",\n direction: \"horizontal\",\n xKey: \"quarter\",\n yKey: \"wearables\",\n yName: \"Wearables\",\n },\n {\n type: \"bar\",\n direction: \"horizontal\",\n xKey: \"quarter\",\n yKey: \"services\",\n yName: \"Services\",\n },\n ],\n };\n\n }\n \n ngOnInit() { \n \n }\n\n ngOnDestroy() {\n \n }\n\n getData()\n {\nreturn [\n {\n quarter: \"Q1'18\",\n iphone: 140,\n mac: 16,\n ipad: 14,\n wearables: 12,\n services: 20,\n },\n {\n quarter: \"Q2'18\",\n iphone: 124,\n mac: 20,\n ipad: 14,\n wearables: 12,\n services: 30,\n },\n {\n quarter: \"Q3'18\",\n iphone: 112,\n mac: 20,\n ipad: 18,\n wearables: 14,\n services: 36,\n },\n {\n quarter: \"Q4'18\",\n iphone: 118,\n mac: 24,\n ipad: 14,\n wearables: 14,\n services: 36,\n },\n ];\n }\n}\n\nI am getting below error which I am not able to resolve\nERROR Error: AG Charts - unable to resolve global window\n at ChartOptions.specialOverridesDefaults (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/ag-charts-community/dist/package/main.esm.mjs:3534:13)\n at ChartOptions (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/ag-charts-community/dist/package/main.esm.mjs:3135:34)\n at Function.createOrUpdate (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/ag-charts-community/dist/package/main.esm.mjs:32475:26)\n at Function.create (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/ag-charts-community/dist/package/main.esm.mjs:32375:36)\n at eval (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/ag-charts-angular/fesm2020/ag-charts-angular.mjs:16:56)\n at _ZoneDelegate.invoke (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/zone.js/fesm2015/zone-node.js:368:26)\n at _Zone.run (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/zone.js/fesm2015/zone-node.js:130:43)\n\n at context (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/@angular/core/fesm2022/core.mjs:14320:28) \n at AgChartsAngular.runOutsideAngular (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/ag-charts-angular/fesm2020/ag-charts-angular.mjs:62:38)\n at AgChartsAngular.ngAfterViewInit (g:/onmytune/IP/InsightsGather.com/Angular17-InsightsGather/node_modules/ag-charts-angular/fesm2020/ag-charts-angular.mjs:16:23)\n\nCan you please help what am I missing here?"} +{"id": "000084", "text": "I would like to initialize some important values when my application starts (Angular v17).\napp.config.ts:\nexport const appConfig: ApplicationConfig = {\n providers: [\n ConfigService,\n ...\n {\n provide: APP_INITIALIZER,\n useFactory: (init: ConfigService) => init.load(),\n multi: true,\n deps: [ConfigService, HttpClient]\n }\n ]\n};\n\nconfig.service.ts:\n@Injectable({\n providedIn: 'root',\n})\nexport class ConfigService {\n private http = inject(HttpClient);\n \n private _config: any;\n private _user: AppUser;\n \n public getConfigUrl(key: string): string {\n return this._config.urls[key];\n }\n\n public load(): Promise {\n return new Promise((resolve, reject) => {\n this._user = new AppUser(); <-- 
normally a request to my node-express server\n this._config = 'test';\n resolve(true);\n });\n }\n}\n\nThen I got this error when I ran the application and did not understand why.\nERROR TypeError: appInits is not a function\n at \\_ApplicationInitStatus.runInitializers (core.mjs:31069:32)\n at core.mjs:34973:28\n at \\_callAndReportToErrorHandler (core.mjs:31146:24)\n at core.mjs:34971:20\n at \\_ZoneDelegate.invoke (zone.js:368:26)\n at Object.onInvoke (core.mjs:14424:33)\n at \\_ZoneDelegate.invoke (zone.js:367:52)\n at \\_Zone.run (zone.js:130:43)\n at \\_NgZone.run (core.mjs:14275:28)\n at internalCreateApplication (core.mjs:34948:23)"} +{"id": "000085", "text": "NG05100: Providers from the `BrowserModule` have already been loaded. If you need access to common directives such as NgIf and NgFor, import the `CommonModule` instead.\n\nimport { Component, OnInit } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; import { Dropdown, DropdownItem, DropdownModule } from 'primeng/dropdown'; interface City { name: string; code: string; } @Component({ selector: 'app-dropdown', standalone: true, imports: [FormsModule,DropdownModule,BrowserAnimationsModule], templateUrl: './dropdown.component.html', styleUrl: './dropdown.component.css', providers: [] }) export class DropdownComponent implements OnInit{ cities: City[] | undefined; selectedCity: City | undefined; ngOnInit() { debugger; this.cities = [ { name: 'New York', code: 'NY' }, { name: 'Rome', code: 'RM' }, { name: 'London', code: 'LDN' }, { name: 'Istanbul', code: 'IST' }, { name: 'Paris', code: 'PRS' } ]; } } \n\ncore.mjs:6531 ERROR Error: NG05100: Providers from the `BrowserModule` have already been loaded. If you need access to common directives such as NgIf and NgFor, import the `CommonModule` instead. at new _BrowserModule (platform-browser.mjs:1258:13) at Object.BrowserModule_Factory [as useFactory] (platform-browser.mjs:1282:14) at Object.factory (core.mjs:3322:38) at core.mjs:3219:47 at runInInjectorProfilerContext (core.mjs:866:9) at R3Injector.hydrate (core.mjs:3218:21) at R3Injector.get (core.mjs:3082:33) at injectInjectorOnly (core.mjs:1100:40) at \u0275\u0275inject (core.mjs:1106:42) at useValue (core.mjs:2854:73)\n\nERROR RuntimeError: NG05100: Providers from the `BrowserModule` have already been loaded. 
If you need access to common directives such as NgIf and NgFor, import the `CommonModule` instead.\n at new _BrowserModule (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :29844:13)\n at Object.BrowserModule_Factory [as useFactory] (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :29868:10) \n at Object.factory (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3965:32)\n at eval (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3886:35)\n at runInInjectorProfilerContext (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :2525:5)\n at R3Injector.hydrate (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3885:11)\n at R3Injector.get (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3778:23)\n at injectInjectorOnly (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :2634:36)\n at \u0275\u0275inject (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :2640:59)\n at useValue (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3610:67) {\n code: 5100\n}\n\nNo output file changes.\n\nApplication bundle generation complete. [0.878 seconds]\n\nERROR RuntimeError: NG05100: Providers from the `BrowserModule` have already been loaded. 
If you need access to common directives such as NgIf and NgFor, import the `CommonModule` instead.\n at new _BrowserModule (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :29844:13)\n at Object.BrowserModule_Factory [as useFactory] (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :29868:10) \n at Object.factory (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3965:32)\n at eval (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3886:35)\n at runInInjectorProfilerContext (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :2525:5)\n at R3Injector.hydrate (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3885:11)\n at R3Injector.get (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3778:23)\n at injectInjectorOnly (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :2634:36)\n at \u0275\u0275inject (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :2640:59)\n at useValue (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3610:67) {\n code: 5100\n}\nERROR RuntimeError: NG05100: Providers from the `BrowserModule` have already been loaded. 
If you need access to common directives such as NgIf and NgFor, import the `CommonModule` instead.\n at new _BrowserModule (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :29844:13)\n at Object.BrowserModule_Factory [as useFactory] (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :29868:10) \n at Object.factory (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3965:32)\n at eval (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3886:35)\n at runInInjectorProfilerContext (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :2525:5)\n at R3Injector.hydrate (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3885:11)\n at R3Injector.get (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3778:23)\n at injectInjectorOnly (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :2634:36)\n at \u0275\u0275inject (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :2640:59)\n at useValue (eval at instantiateModule (file:///E:/Angular%20off/login-page/node_modules/vite/dist/node/chunks/dep-G-px366b.js:54755:28), :3610:67) {\n code: 5100\n}\n\n\nwe want to show data in dropdown"} +{"id": "000086", "text": "I have a select that controls a list that just doesn't seem to be working. The error I get in the console is\ncore.mjs:6531 ERROR Error: NG05105: Unexpected synthetic listener @transformPanel.done found. 
Please make sure that:\n\nEither BrowserAnimationsModule or NoopAnimationsModule are imported in your application.\n\nTypescript:\nimport { Component } from '@angular/core';\nimport { MatDividerModule } from '@angular/material/divider';\nimport { MatIconModule } from '@angular/material/icon';\nimport { DatePipe } from '@angular/common';\nimport { MatListModule } from '@angular/material/list';\nimport { MatSelectModule } from '@angular/material/select';\nimport { MatTooltipModule } from '@angular/material/tooltip';\nimport { provideAnimations } from '@angular/platform-browser/animations'; \nimport { MatFormFieldModule } from '@angular/material/form-field';\nimport { FormsModule } from '@angular/forms';\n\nexport interface legend {\n name: string;\n image: string;\n tooltip: string;\n}\nexport interface maplist {\n name: string;\n legends: Array\n}\n\n\n\n@Component({\n selector: 'app-legend',\n standalone: true,\n imports: [MatListModule, MatIconModule, MatDividerModule, DatePipe,\n MatSelectModule, MatTooltipModule, MatFormFieldModule, FormsModule, \n ],\n providers: [\n provideAnimations()\n ],\n templateUrl: './legend.component.html',\n styleUrl: './legend.component.css',\n})\nexport class LegendComponent {\n selectedValue: Array = [];\n\n maplistdata: maplist[] = [\n {\n name: \"Echocondria\",\n legends: [\n {\n name: \"House\",\n image: \"\",\n tooltip: \"Living quarters for the residents.\"\n },\n {\n name: \"Shop\",\n image: \"\",\n tooltip: \"Places to buy and sell goods.\"\n },\n ]\n },\n {\n name: \"Echocondria Sewers\",\n legends: [\n {\n name: \"House\",\n image: \"\",\n tooltip: \"Living quarters for the residents.\"\n },\n {\n name: \"Shop\",\n image: \"\",\n tooltip: \"Places to buy and sell goods.\"\n },\n ]\n },\n ]\n}\n\nHTML:\n \n Select an option\n \n None\n @for (legend of maplistdata; track legend) {\n {{legend.name}}\n }\n \n \n\n @for (legenditem of selectedValue; track legenditem) {\n \n folder\n
{{ legenditem.name }}
\n
{{legenditem.tooltip}}
\n
\n }\n\n\nI've tried importing the modules in question but still get the same error or an error that the module is already included."} +{"id": "000087", "text": "In the previous *ngIf directives we were able to chain multiple async operations to not repeat the async pipe in the template but to reuse them later as one subscription, now I want to use the same behavior using the new @if syntax provided in Angular 17, is it possible?\nOld way:\n\n Show something\n\n}"} +{"id": "000088", "text": "My folder structure\nI don't understand why the rendering is on the server side but the components are placed on the client because after building I see the components are in the browser folder. am I making a mistake here?\nI tried turning off javascript to check if the components are being rendered on the server side, the result is that after turning off js, the website is still displayed, which means the website is still rendered on the server side."} +{"id": "000089", "text": "I am using Angular version 17 cli\n
\n\nI get this error:\n Can't bind to 'ngIf' since it isn't a known property of 'div' (used in the \n'CategoriesStyleThreeComponent' component template).\nIf the 'ngIf' is an Angular control flow directive, please make sure that either the \n'NgIf' directive or the 'CommonModule' is included in the '@Component.imports' of this component.\n\nThis is the app.module:\n import { HttpClientModule } from '@angular/common/http'; \nimport { BrowserAnimationsModule } from '@angular/platform-browser/animations';\nimport { BrowserModule } from '@angular/platform-browser';\nimport { NgxScrollTopModule } from 'ngx-scrolltop';\nimport { NgModule } from '@angular/core';\nimport { AppRoutingModule } from './app-routing.module';\nimport { CommonModule } from '@angular/common'; \nimport {CategoriesStyleThreeComponent} from './components/common/categories-style-three/categories-style-three.component'\n\n @NgModule({\n declarations: [],\n imports: [\n BrowserModule,\n AppRoutingModule,\n BrowserAnimationsModule,\n NgxScrollTopModule,\n HttpClientModule,\n CommonModule\n ],\n providers: [],\n bootstrap: [],\n exports: [CategoriesStyleThreeComponent]\n })\n export class AppModule { }\n\nand my component code:\n import { Component, OnInit } from '@angular/core';\n import { ThemeCustomizerService } from '../theme-customizer/theme-customizer.service';\n import { RouterLink } from '@angular/router';\n import { CategoryService } from '../../../services/category.service';\n import { Category } from '../../../models/category.model';\n\n\n@Component({\nselector: 'app-categories-style-three',\nstandalone: true,\nimports: [RouterLink],\ntemplateUrl: './categories-style-three.component.html',\nstyleUrls: ['./categories-style-three.component.scss']\n})\nexport class CategoriesStyleThreeComponent implements OnInit {\ntrendingCategories: Category[];\nnumCategories: number = 5;\n\nisToggled = false;\n\nconstructor(public themeService: ThemeCustomizerService,\n private categoryService: CategoryService) {\n this.themeService.isToggled$.subscribe(isToggled => {\n this.isToggled = isToggled;\n });\n}\n\ntoggleTheme() {\n this.themeService.toggleTheme();\n}\n\nngOnInit(): void { \n this.loadTrendingCategories();\n}\n\nloadTrendingCategories(): void {\n this.categoryService.getTrendingCategories()\n .subscribe(categories => {\n this.trendingCategories = categories;\n console.log(categories);\n });\n }\n\n }"} +{"id": "000090", "text": "I have a website that is a Video Gallery of my Youtube, and this videos are builded in iframe tag.\nBut I want use some loading in this for loop, because when I open my page the load is slow.\nWhat I do?\nMy Code:\n`@for (link of youtubeLinks; track $index) {\n
\n \n
\n }`\n\nI used a @defer directive, but doesn't worked.\nI wanted each iframe to be loaded as the page scrolled, or to each one to be loaded at a time, not all of them as soon as they entered the page."} +{"id": "000091", "text": "We have updated our project into Angular 17.\nI got this ERROR TypeError: this.router.events.filter is not a function In console.\nHere Is the app.components.ts file code\nimport { AfterViewInit, Component, OnInit, Inject, PLATFORM_ID } from '@angular/core';\nimport { isPlatformBrowser } from '@angular/common';\nimport { Router, ActivatedRoute, NavigationEnd, RouterModule } from '@angular/router';\nimport { environment } from \"../environments/environment\";\nimport { Title, Meta } from '@angular/platform-browser';\nimport { WINDOW } from '@ng-toolkit/universal';\nimport { Idle, DEFAULT_INTERRUPTSOURCES } from '@ng-idle/core';\nimport { CookieService } from 'ngx-cookie-service';\nimport { NgIdleKeepaliveModule } from '@ng-idle/keepalive';\nimport { HttpClientModule } from '@angular/common/http';\nimport { LoadingComponent } from './shared/components/loading/loading.component';\nimport { DataService } from './shared/service/data.service';\nimport { LoaderService } from './shared/service/loader.service';\nimport { SeoService } from './shared/service/seo.service';\n\ndeclare const ga: any;\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css'],\n standalone: true,\n imports: [RouterModule, NgIdleKeepaliveModule, LoadingComponent, HttpClientModule],\n providers: [{ provide: WINDOW, useValue: {} }]\n})\n\nexport class AppComponent implements OnInit, AfterViewInit {\n title = 'motor-happy';\n serverName: String;\n idleState = 'Not started.';\n timedOut = false;\n lastPing?: Date = null;\n title1 = 'angular-idle-timeout';\n expiredDate: Date;\n constructor(@Inject(WINDOW) private window: any,\n @Inject(PLATFORM_ID) private platformId: Object,\n public router: Router,\n public activatedRoute: ActivatedRoute,\n public titleService: Title,\n public meta: Meta,\n public cookieService: CookieService,\n public dataService: DataService,\n public loaderSVC: LoaderService,\n public seoService: SeoService,\n public idle: Idle,\n ) { \n\n idle.setIdle(600);\n idle.setTimeout(600);\n idle.setInterrupts(DEFAULT_INTERRUPTSOURCES);\n\n idle.onTimeout.subscribe(() => {\n this.timedOut = true;\n localStorage.removeItem('authorisationToken');\n this.router.navigate(['/']);\n });\n this.reset();\n this.serverName = environment.name;\n\n this.activatedRoute.queryParams.subscribe(params => {\n let utm_CampaignSource = params['utm_CampaignSource'];\n let getCookie = this.cookieService.get('utm_fetch');\n\n if (utm_CampaignSource && (!getCookie)) {\n const dateNow = new Date();\n dateNow.setMinutes(dateNow.getMinutes() + 30);\n this.cookieService.set('utm_fetch', \"Yes\", dateNow, '', '', true);\n }\n });\n\n }\n\n reset() {\n this.idle.watch();\n this.timedOut = false;\n }\n\n ngAfterViewInit(): void {\n this.router.events.subscribe(event => {\n if (event instanceof NavigationEnd && isPlatformBrowser(this.platformId)) {\n if (this.serverName == \"production\") {\n ga('set', 'page', event.urlAfterRedirects);\n ga('send', 'pageview');\n }\n }\n });\n }\n\n ngOnInit() {\n this.seoService.createLinkForCanonicalURL();\n this.router.events.subscribe((evt) => {\n this.loaderSVC.startLoading(); \n if (!(evt instanceof NavigationEnd)) {\n return;\n }\n setTimeout(() => {\n this.loaderSVC.stopLoading(); \n });\n if 
(isPlatformBrowser(this.platformId)) {\n window.scrollTo(0, 0);\n\n const userAgent = window.navigator.userAgent;\n sessionStorage.setItem('userAgent', userAgent);\n sessionStorage.setItem('ipAddress', '0.0.0.0');\n }\n });\n\n this.router.events\n .filter((event) => event instanceof NavigationEnd)\n .map(() => this.activatedRoute)\n .map((route) => {\n while (route.firstChild) route = route.firstChild;\n return route;\n })\n .filter((route) => route.outlet === 'primary')\n .mergeMap((route) => route.data)\n .subscribe((event) => {\n this.updateDescription(event['description'], event['keywords'], event['title']);\n });\n }\n\n updateDescription(desc: string, keywords: string, title: string) {\n if (title) {\n this.titleService.setTitle(title);\n }\n if (desc) {\n this.meta.updateTag({ name: 'description', content: desc })\n }\n /*if(keywords){\n this.meta.updateTag({ name: 'keywords', content: keywords })\n }*/\n }\n}\n\nWe have tried to update\nthis.router.events.filter\nwith\nthis.router.events.pipe(filter(event => event instanceof NavigationEnd)\nBut don't know how to map after that"} +{"id": "000092", "text": "X [ERROR] Expected $config.color.primary to be a valid M3 palette.\nangular 17\nI can't find the expected valid M3 palette for @angular/material-experimental anywhere.\ntryied this in styles.scss:\n@use \"@angular/material\" as mat;\n@use \"@angular/material-experimental\" as matx;\n\n$primary-palette: (\n 0: #000000,\n 10: #191027,\n 20: #2b1a41,\n 25: #3d245c,\n 30: #4f2f76,\n 35: #9257c5,\n 40: #654083,\n 50: #79539b,\n 60: #9d80b7,\n 70: #c1aed3,\n 80: #e5dbef,\n 90: #e5ddee,\n 95: #e9e3ef,\n 98: #ebe7ef,\n 99: #efecf1,\n 100: #ffffff,\n);\n\n$m3-custom-theme: (\n color: (\n primary: $primary-palette\n ),\n);\n\n$light-theme: matx.define-theme($m3-custom);\n\nhtml,\nbody {\n height: 100%;\n @include mat.button-theme($light-theme);\n}"} +{"id": "000093", "text": "I am having different levels of closure using *ngFor directive and the new @for block. I have a parent component that creates multiple child components using a for loop:\n \n \n \n\nIn the child component I accept the [activeProblem] input as either a ProblemGroup | NarrativeProblem class. In order to effectively distinguish between the two and render a view. I create two instance variables\n problemGroup?: ProblemGroup;\n narrativeProblem?: NarrativeProblem;\n\n ngOnInit(): void {\n if (this.activeProblem instanceof ProblemGroup) {\n this.problemGroup = this.activeProblem as ProblemGroup;\n } else {\n this.narrativeProblem = this.activeProblem;\n }\n }\n\nWithin this same view I have a click event handler that toggles the value under the problem group (essentially mutating the object).\n \n\nIf I have rendered the child component using @for when assigning activeProblem to problemGroup it loses it's reference, effectively creating a copy of the object. 
So any mutations I do in the child component is not reflected in the original object.\nIf I have rendered the child component using *ngFor directive, then it works as expected, any mutations I make to problemGroup effectively points to activeProblem.\nI am hoping I can get some clarity on why this is happening?"} +{"id": "000094", "text": "I'm using Angular 17 with the following code:\ndatabase.component.html\n@for(user of (users | userPipe:filters); track user.id) {\n \n {{ user.name }}\n {{ user.surname }}\n {{ user.age }}\n \n}\n@empty {\n \n Empty\n \n}\n\nfilters is a string array with the keywords for filtering the matched database entries.\ndatabase.pipe.ts\n@Pipe({\n name: 'userPipe',\n pure: false\n})\nexport class databasePipe implements PipeTransform {\n transform(values: Users[], filters: string[]): Users[] {\n \n if (!filters || filters.length === 0 || values.length === 0) {\n return values;\n }\n\n return values.filter((value: User) => {\n filters.forEach(filter => {\n const userNameFound = value.name.toLowerCase().indexOf(filter.toLowerCase()) !== -1;\n const userSurnameFound = value.surname.toLowerCase().indexOf(filter.toLowerCase()) !== -1;\n const ageFound = value.age.toLowerCase().indexOf(filter.toLowerCase()) !== -1;\n\n if (userNameFound || userSurnameFound || ageFound) {\n \n console.log(\"value: \", value);\n return value;\n }\n return \"\";\n });\n });\n }\n}\n\nIt is working and I can see matching entries with value: in the browser console just fine but my filtered table just returns \"Empty\" and no data is shown.\nDoes anyone know why this happens?"} +{"id": "000095", "text": "We are using Angular 17 along with Reactive forms in our project.\nWe have written a custom directive which formats the output to US phone number format 111-222-3333.\nWhat we are seeing is that when someone tries to copy a number into the field - the field gets formatted, but the validator is still saying it is not valid.\nHTML Code:\n \n\nTypescript Code:\nphone: new FormControl(null, [Validators.pattern('^[0-9]{3}-[0-9]{3}-[0-9]{4}$')])\n\nCustom Directive Code:\nimport {Directive, HostListener} from '@angular/core';\n\n@Directive({\n selector: '[phoneFormatterDirective]'\n})\nexport class PhoneFormatterDirective {\n\n @HostListener('input', ['$event'])\n onKeyDown(event: KeyboardEvent) {\n event.preventDefault();\n const input = event.target as HTMLInputElement;\n console.log(input.value);\n let trimmed = input.value\n .replaceAll('-','')\n .replaceAll('(','')\n .replaceAll(')','')\n .replaceAll(/\\s+/g, '');\n if (trimmed.length > 12) {\n trimmed = trimmed.substr(0, 12);\n }\n let numbers = [];\n numbers.push(trimmed.substr(0,3));\n if(trimmed.substr(3,2)!==\"\")\n numbers.push(trimmed.substr(3,3));\n if(trimmed.substr(6,3)!=\"\")\n numbers.push(trimmed.substr(6,4));\n\n input.value = numbers.join('-');\n console.log(numbers.join(\"-\"));\n }\n}\n\nLet's say I am trying to paste (555) 123-1234 - the value gets formatted to 555-123-1234, but the input says it is still invalid.\nIt would be valid if I deleted one character and then wrote it manually - which is a kind of strange behavior."} +{"id": "000096", "text": "\n \n {{ item.label }}\n \n \n \n @for (item of items; track:item.id) { \n \n }\n\n {{data}} \n\ncomponent-a.html\n \n\ncomponent-b.html\n\n\nAm Trying to pass ngTemplate by using below code\n \n\nand using inside all the components that am using inside Items"} +{"id": "000097", "text": "I am creating an Angular project and having the following error, which makes my page 
unable to load:\nERROR NullInjectorError: R3InjectorError(Standalone[_AppComponent])[ActivatedRoute -> ActivatedRoute -> ActivatedRoute]:\nNullInjectorError: No provider for ActivatedRoute!\nSince it is Angular v17, I do not have AppModule on it.\nThis is my following code:\napp.config.ts\nimport { ApplicationConfig } from '@angular/core';\nimport { provideRouter } from '@angular/router';\n\nimport { routes } from './app.routes';\n\nexport const appConfig: ApplicationConfig = {\n providers: [provideRouter(routes)]\n};\n\n\napp.component.ts\nimport { Component, LOCALE_ID } from '@angular/core';\nimport { registerLocaleData } from '@angular/common';\nimport { RouterLink, RouterLinkActive, RouterOutlet } from '@angular/router';\nimport { SharedModule } from './shared/shared.module';\nimport localePT from '@angular/common/locales/pt';\n\nregisterLocaleData(localePT);\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [SharedModule, RouterOutlet, RouterLink],\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css'],\n providers: [{provide: LOCALE_ID, useValue: 'pt-br'}]\n})\nexport class AppComponent {\n title = \"app\";\n}\n\n\napp.routes.ts\nimport { Routes } from '@angular/router';\nimport { HomeComponent } from './features/pages/home/home.component';\n\nexport const routes: Routes = [\n { path: 'home', component: HomeComponent },\n { path: '', redirectTo: 'home', pathMatch: 'full' }\n];\n\nThank you in advance!\nI tried adding RouterModule.forRoot(routes) in AppComponent, but no success.\nI tried to import ActivatedRoute on AppComponent, but it throwed another error: ERROR Error: NG0204: Can't resolve all parameters for ActivatedRoute: (?, ?, ?, ?, ?, ?, ?, ?).\nI also tried to import ActivateRoute in AppComponent as the following, but it returns: ERROR Error: NG04002: Cannot match any routes. URL Segment: 'home'\nimport { Component, LOCALE_ID } from '@angular/core';\nimport { registerLocaleData } from '@angular/common';\nimport { ActivatedRoute, RouterLink, RouterOutlet } from '@angular/router';\nimport { SharedModule } from './shared/shared.module';\nimport localePT from '@angular/common/locales/pt';\nimport { routes } from './app.routes';\n\nregisterLocaleData(localePT);\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [SharedModule, RouterOutlet, RouterLink],\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css'],\n providers: [{provide: LOCALE_ID, useValue: 'pt-br'}, {provide: ActivatedRoute, useValue: routes}]\n})\nexport class AppComponent {\n title = \"app\";\n}\n\n\nSince Angular v17 does not have AppModule and does not need it, I did not create it.\nI am expecting my page to load properly, it is not allowing it to load."} +{"id": "000098", "text": "I'm developing an Angular application and encountering an issue with localStorage in my AuthService. 
When attempting to use localStorage to store the user's email for authentication purposes, I'm getting an error: \"localStorage is not defined\".\nHere's a simplified version of my AuthService:\n//auth.service.ts\n\nimport { Injectable } from '@angular/core';\nimport { Router } from '@angular/router';\nimport { BehaviorSubject } from 'rxjs';\n\n@Injectable({\n providedIn: 'root',\n})\nexport class AuthService {\n public isAuth = new BehaviorSubject(false);\n\n constructor(private router: Router) {\n this.autoSignIn();\n }\n\n autoSignIn() {\n if (localStorage.getItem('email')) {\n this.isAuth.next(true);\n this.router.navigate(['/dashboard']);\n }\n }\n\n signIn(email: string) {\n localStorage.setItem('email', email);\n this.isAuth.next(true);\n this.router.navigate(['/dashboard']);\n }\n\n signOut() {\n localStorage.removeItem('email');\n this.isAuth.next(false);\n this.router.navigate(['/auth']);\n }\n}\n\nI've imported the necessary modules, including Injectable from @angular/core, and I'm using localStorage within my service methods."} +{"id": "000099", "text": "I am trying to convert my code directives from Angular 16 to Angular 17. However, I am unable to achieve the reference in Angular 17 so that for both else it will refer to the same ng-template.\n
\n
0; else noTracks\">\n
Favorites
\n
\n
\n

Recent

\n
{{movies[movies.length-1].title}}\n -\n {{movies[movies.length-1].rating}}
\n
\n
\n

Total

\n
{{ movies.length}}
\n
\n
\n
\n
\n\n\n
Favorites
\n
\n
\n

Recent

\n
NA
\n
\n
\n

Total

\n
0
\n
\n
\n
\n\nIn Angular 17 I am trying to achieve the above using @if and @else without using || cause for type never error I will get. Further, I want how we can achieve this.\nNote: Don't combine @if statements.\n@if (movies) {\n @if (movies.length>0) {\n
Favorites
\n
\n
\n

Recent

\n
{{movies[movies.length-1].title}}\n -\n {{movies[movies.length-1].rating}}
\n
\n
\n

Total

\n
{{ movies.length}}
\n
\n
\n } @else {\n
Favorites
\n
\n
\n

Recent

\n
NA
\n
\n
\n

Total

\n
0
\n
\n
\n }\n}"} +{"id": "000100", "text": "I created centralized error handling service using BehaviorSubject in Angular v 17.It does not working in the expected way!\nThe problem areas are :\nNotificationService --> the centralized error handler.\nNotificationComponent --> Reusable User Friendly Error and Progress message showing popup modal,i directly added it in my Appcomponent.\nSurrender Pet Component --> Where i try to use the Notification for showing option.\nI thik those BhaviourSubjects not emitting the data the way i expected\nNotificationService:\n import { Injectable } from '@angular/core';\n import { BehaviorSubject, Subject } from 'rxjs';\n\n @Injectable({\n providedIn: 'root'\n })\n export class NotificationService {\n successMessageSubject = new BehaviorSubject(null);\n errorMessageSubject = new BehaviorSubject(null);\n\n successMessageAction$ = this.successMessageSubject.asObservable();\n errorMessageAction$ = this.errorMessageSubject.asObservable();\n\n setSuccessMessage(message: string) {\n this.successMessageSubject.next(message);\n }\n\n setErrorMessage(message: string) {\n this.errorMessageSubject.next(message);\n console.log(this.errorMessageSubject.getValue());\n }\n\n clearSuccessMessage() {\n this.successMessageSubject.next(null);\n }\n\n clearErrorMessage() {\n this.errorMessageSubject.next(null);\n }\n\n clearAllMessages() {\n this.clearSuccessMessage();\n this.clearErrorMessage();\n }\n }\n\n\nNotificationComponent :\nimport { Component, OnInit, inject } from '@angular/core';\nimport { NotificationService } from '../../../core/services/notifiaction/notification.service';\nimport { AsyncPipe, CommonModule, NgIf } from '@angular/common';\nimport { tap } from 'rxjs';\n\n@Component({\n selector: 'app-notification',\n standalone: true,\n imports: [NgIf,AsyncPipe,CommonModule],\n templateUrl: './notification.component.html',\n styleUrl: './notification.component.scss',\n \n})\nexport class NotificationComponent implements OnInit {\n \n private notificationService:NotificationService = inject(NotificationService);\n \n \n successMessage$ = this.notificationService.successMessageAction$.pipe(\n tap((message)=>{\n if(message){\n console.log('clicked')\n setTimeout(()=>{\n this.notificationService.clearAllMessages() \n },5000)\n }\n })\n )\n\n errorMessage$ = this.notificationService.errorMessageAction$.pipe(\n tap((message)=>{\n console.log(message);\n if(message){\n console.log('clicked')\n setTimeout(()=>{\n this.notificationService.clearAllMessages() \n },5000)\n }\n })\n )\n\n\n ngOnInit(): void {\n console.log(\"initialized\")\n }\n }\n\nSurrender Pet Component\nimport { FormControl, FormGroup, ReactiveFormsModule, Validators } from '@angular/forms';\nimport { RouterLink } from '@angular/router';\nimport { ButtonComponent } from '../../../shared/components/button/button.component';\nimport { NgClass, NgIf, } from '@angular/common';\nimport { Component, inject } from '@angular/core';\nimport { HttpClientModule } from '@angular/common/http';\nimport { PetsAdopteService } from '../../../core/services/pets-adopte/pets-adopte.service';\nimport { SurrenderPet } from '../../../core/models/surrenderPet.model';\nimport { NotificationService } from '../../../core/services/notifiaction/notification.service';\n\n@Component({\n selector: 'app-surrender-pet',\n standalone: true,\n imports: [\n ReactiveFormsModule,\n NgClass,\n RouterLink,\n NgIf,\n ButtonComponent,\n HttpClientModule\n ],\n providers:[PetsAdopteService,NotificationService],\n templateUrl: 
'./surrender-pet.component.html',\n styleUrl: './surrender-pet.component.scss'\n})\nexport class SurrenderPetComponent {\n\n private petAdopteService=inject(PetsAdopteService);\n private notificationService=inject(NotificationService);\n\n submitted:boolean = false;\n\n registerPet= new FormGroup({\n name: new FormControl('',[Validators.required]),\n phoneNo:new FormControl('',[Validators.required]),\n petType:new FormControl ('',[Validators.required]),\n location:new FormControl('',[Validators.required]),\n otherDetails:new FormControl('',[Validators.required])\n })\n\n onSubmit(){\n this.submitted = true;\n if(this.registerPet.valid){\n this.petAdopteService.sendPetSurrender_Request(this.registerPet.value as SurrenderPet).subscribe(\n {\n next:(data)=>{\n console.log(data);\n }\n \n }\n )\n }\n }\n}\n\n\nNotification Component html template:\n
\n
\n
\n \n \n \n Success icon\n
\n
{{successMessage}}
\n \n
\n\n
\n
\n \n \n \n Success icon\n
\n
{{errorMessage}}
\n \n
\n
\n \n\n\n\nThe PetsAdopteService where i called the setMessages functions!\n\n import { HttpClient } from '@angular/common/http';\n import { Injectable, inject } from '@angular/core';\n import { EMPTY, Observable, catchError, tap } from 'rxjs';\n import { SurrenderPet } from '../../models/surrenderPet.model';\n import { environment } from '../../../../environments/environment.development';\n import { petsSurrenderEndpoints } from '../../constants/APIEndPoints/petsAdopte.EndPoints';\n import { NotificationService } from '../notifiaction/notification.service';\n\n @Injectable({\n providedIn: 'root'\n })\n export class PetsAdopteService {\n\n constructor() { }\n\n private http:HttpClient=inject(HttpClient);\n private notificationService=inject(NotificationService);\n\n sendPetSurrender_Request(payload:SurrenderPet):Observable{\n return this.http.post (environment.apiUrl+petsSurrenderEndpoints?.createSurrenderRequest,payload).pipe(\n tap((data)=>{\n this.notificationService.setSuccessMessage('Your request sent successfully')\n }),\n catchError((error)=>{\n console.log(error)\n this.notificationService.setErrorMessage(\"eROR\");\n return EMPTY;\n })\n )\n }\n }\n\n\n\nIs there any way to fix this problem with out choosing signal, because i want to learn more about rxjs!"} +{"id": "000101", "text": "Application runs perfectly fine with npm start, when it is built using ng build, it gives the following errors:\n\u25b2 [WARNING] Module 'amazon-quicksight-embedding-sdk' used by './dashboard.component.ts' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module 'qrcode' used by 'node_modules/@aws-amplify/ui-angular/fesm2020/aws-amplify-ui-angular.mjs' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module 'ts-access-control' used by './permission.service.ts' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module 'style-dictionary/lib/utils/deepExtend.js' used by 'node_modules/@aws-amplify/ui/dist/esm/theme/createTheme.mjs' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module 'style-dictionary/lib/utils/flattenProperties.js' used by 'node_modules/@aws-amplify/ui/dist/esm/theme/createTheme.mjs' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module 'lodash/kebabCase.js' used by 'node_modules/@aws-amplify/ui/dist/esm/theme/utils.mjs' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module 'style-dictionary/lib/utils/references/usesReference.js' used by 'node_modules/@aws-amplify/ui/dist/esm/theme/utils.mjs' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module 'google-libphonenumber' used by './phone.service.ts' 
is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module '@aws-crypto/sha256-js' used by 'node_modules/@aws-amplify/auth/dist/esm/providers/cognito/apis/signOut.mjs' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module 'lodash/pickBy.js' used by 'node_modules/@aws-amplify/ui/dist/esm/machines/authenticator/utils.mjs' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\n\n\u25b2 [WARNING] Module 'lodash/merge.js' used by 'node_modules/@aws-amplify/ui/dist/esm/validators/index.mjs' is not ESM\n\n CommonJS or AMD dependencies can cause optimization bailouts.\n For more information see: https://angular.io/guide/build#configuring-commonjs-dependencies\n\nI am willing to provide more information required to fix this issue."} +{"id": "000102", "text": "I use angular 17\nthis is my app.routes.ts\nexport const routes: Routes = [\n { path: '', pathMatch: 'full', component: HomeComponent },\n { path: 'editors', component: EditorsComponent },\n { path: 'partners', component: PartnersComponent },\n { path: 'investors', component: InvestorsComponent },\n { path: 'telecoms', component: TelecomsComponent },\n { path: 'institutional', component: InstitutionalComponent },\n { path: 'universities', component: UniversitiesComponent },\n { path: 'influencers', component: InfluencersComponent },\n { path: 'ambassador', component: AmbassadorComponent },\n];\n\nand my app.config.ts\nexport function HttpLoaderFactory(http: HttpClient) {\n return new TranslateHttpLoader(http, '/assets/i18n/', '.json');\n}\n\nconst scrollConfig: InMemoryScrollingOptions = {\n scrollPositionRestoration: 'top',\n anchorScrolling: 'enabled',\n};\n\nconst inMemoryScrollingFeature: InMemoryScrollingFeature =\n withInMemoryScrolling(scrollConfig);\n\n\nexport const appConfig: ApplicationConfig = {\n providers: [\n provideRouter(routes, inMemoryScrollingFeature),\n provideLottieOptions({\n player: () => import('lottie-web'),\n }),\n provideAnimations(),\n provideHttpClient(),\n TranslateModule.forRoot({\n defaultLanguage: 'en',\n loader: {\n provide: TranslateLoader,\n useFactory: HttpLoaderFactory,\n //useFactory: (http: HttpClient) => new CustomTranslateLoader(http),\n deps: [HttpClient]\n }\n }).providers!, provideClientHydration(), provideClientHydration(), provideClientHydration() ],\n};\n\nwhen I try to go on the page http://localhost/editor i got lot 404 error for example http://localhost:4200/editors/assets/animations/editor_anim1.json 404\nAngular add the name of the page in the path of the ressource. 
This path should be http://localhost:4200/assets/animations/editor_anim1.json 404\nWhen I from home then go on page editors and come back on home, the url is http://localhost:4200/editors and when i go to editor again, now the url is localhost:4200/editors/editors\nWhat's wrong with my routing config ?"} +{"id": "000103", "text": "The problem basically is that when I'm logged into the dashboard, every time I reload the browser page it renders the login component for an instant\nTokenService\nexport class TokenService {\n isAuthentications: BehaviorSubject = new BehaviorSubject(false);\n constructor(@Inject(PLATFORM_ID) private platformId: Object) { \n const token = this.getToken();\n if(token){\n this.updateToken(true)\n }\n }\n\n setToken(token: string){\n this.updateToken(true);\n localStorage.setItem('user', token)\n }\n updateToken(status: boolean){\n this.isAuthentications.next(status)\n }\n getToken(): string | null{\n if (typeof window !== 'undefined' && window.sessionStorage) {\n return localStorage.getItem('user');\n }\n\n return null\n }\n}\n\nAuthGuard\nexport const authGuard: CanActivateFn = (route, state) =\\> {\nconst tokenService = inject(TokenService)\nconst router = inject(Router)\n// tokenService.isAuthentications.subscribe({\n// next: (v) =\\> {\n// if(!v){\n// router.navigate(['/login'])\n// }\n// }\n// })\n// return true;\n\nreturn tokenService.isAuthentications.pipe(map( (user) =\\> {\nif(!user){\nreturn router.createUrlTree(['/login']);\n}else{\nreturn true\n}\n}))\n};\n\nRoutes\nexport const routes: Routes = [\n { path: 'login', component: LoginComponent },\n { path: '', redirectTo: 'login', pathMatch: 'full'},\n {path: '' , component: LayoutComponent, children: [\n {path: 'dashboard', component: DashboardComponent, canActivate: [authGuard] }\n ]}\n];\n\ngif that shows the problem\nI've tried other other approaches on how to secure the route however, whenever my guard should redirect to 'login' it has this behavior"} +{"id": "000104", "text": "I'm encountering an error in my Angular application's template file (dashboard.component.html). The error message is \"Object is possibly 'undefined'.\" Here's the relevant part of the template causing the issue:\n
\n

Dashboard

\n\n
    \n @for(post of posts; track post.title){\n @if(post.id%2==0){\n
  • {{post.id}}-{{post.title}}
  • \n }\n }\n
\n
\n\n// dashboard.component.ts\n\nimport { Component } from '@angular/core';\nimport { AuthService } from '../auth.service';\n\n@Component({\n selector: 'app-dashboard',\n standalone: true,\n imports: [],\n templateUrl: './dashboard.component.html',\n styleUrl: './dashboard.component.scss',\n})\n\nexport class DashboardComponent {\n email = localStorage.getItem('email');\n posts:any;\n constructor(private authService: AuthService) { \n this.GetAllPosts();\n }\n\n signOut() {\n this.authService.signOut();\n }\n\n GetAllPosts(){\n this.authService.getAllPosts().subscribe((res)=>{\n this.posts = res;\n })\n }\n}\n\nThe error specifically points to line 10, where I'm trying to iterate over posts using an @for loop and check if post.id % 2 == 0. However, TypeScript is flagging this as a potential error because posts might be undefined or null.\nHow can I modify this template code to handle the possibility that posts might be undefined while avoiding this error?"} +{"id": "000105", "text": "In my case ,\nI have following block of code,\nIn which, i can store the info value in the *ngIf expression block and use in the template, how to do the same with newly introduced @if syntax in angular 17 ?\nI couldn't find any samples / docs that can do the same using @if syntax.\n
\n \"Empty\n
\n \n
\n
"} +{"id": "000106", "text": "Could anyone advise after Angular update up to 18 I got:\nAn unhandled exception occurred: (0 , os_1.availableParallelism) is not a function\nIn angular-errors.log\n[error] TypeError: (0 , os_1.availableParallelism) is not a function at Object. (C:\\Users\\Zendbook\\Documents\\DILAU\\tracker-users-web\\node_modules\\piscina\\dist\\src\\index.js:37:54) at Module._compile (node:internal/modules/cjs/loader:1218:14) at Module._extensions..js (node:internal/modules/cjs/loader:1272:10) at Module.load (node:internal/modules/cjs/loader:1081:32) at Module._load (node:internal/modules/cjs/loader:922:12) at Module.require (node:internal/modules/cjs/loader:1105:19) at require (node:internal/modules/cjs/helpers:103:18) at Object. (C:\\Users\\Zendbook\\Documents\\DILAU\\tracker-users-web\\node_modules\\piscina\\dist\\src\\main.js:5:33) at Module._compile (node:internal/modules/cjs/loader:1218:14) at Module._extensions..js (node:internal/modules/cjs/loader:1272:10)\nHow to fix ?\ntry to find answer in google."} +{"id": "000107", "text": "I have configured klaro cookie package in my Angular project with all the required configurations, but I want to change the icon besides count of service:\n\nI want \"+\" and \"-\" icons accordingly when service section is open/close. Is there any way to customize it as per need?\nOld icon doesn't fade away:"} +{"id": "000108", "text": "After upgrading my Angular core libraries to version 18, I migrated to Angular Material 18 by running:\nng update @angular/material\nThe update went smoothly but when I tried to compile my app I got the following error:\nX [ERROR] Undefined function.\n \u2577\n14 \u2502 $myapp-theme-primary: mat.define-palette(mat.$indigo-palette, A400, A100, A700);\n \u2502 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n \u2575\n src\\styles.scss 14:23 root stylesheet [plugin angular-sass]\n\n angular:styles/global:styles:2:8:\n 2 \u2502 @import 'src/styles.scss';\n \u2575 ~~~~~~~~~~~~~~~~~\n\nMy styles.scss worked perfectly with the previous version of Angular Material (v.17). It looks as follows:\n@use '@angular/material' as mat;\n@include mat.core();\n\n$myapp-theme-primary: mat.define-palette(mat.$indigo-palette, A400, A100, A700);\n$myapp-theme-accent: mat.define-palette(mat.$indigo-palette);\n$myapp-theme-warn: mat.define-palette(mat.$red-palette);\n\n$myapp-theme: mat.define-light-theme((\n color: (\n primary: $myapp-theme-primary,\n accent: $myapp-theme-accent,\n warn: $myapp-theme-warn,\n )\n));\n\n@include mat.all-component-themes($myapp-theme);\n\nHow do I have to adapt my code in styles.scss in order to make it work with Angular Material 18?"} +{"id": "000109", "text": "I want to use the new @for syntax together with @empty to either show the user a table or some text telling there is no data.\nWith ngFor I usually checked the length of the data array. If not empty:\n\nAdd the table header\nngFor the data\nAdd the table footer\n\nWith the newer syntax I hoped to be able to combine those 3 steps above into the @for itself like this:\n@for(order of licenseOverview.orders; track order.id; let firstRow = $first; let lastRow = $last) {\n @if(firstRow) {\n \n \n \n \n\n @if(lastRow) {\n ...
{{ order.reference }}{{ ... }}
\n }\n}\n@empty {

No data for you!

}\n\nI expected this to just compile and render the table, but it seems like Angular can't handle this. Is there a way to get this to work?\nEDIT: The error I get looks like this:\n @if(firstRow) {\n \n \n \n \n \n \n \n \n \n } ---> Unexpected closing block"} +{"id": "000110", "text": "Since the upgrade to angular18, I'm having timeout with simple component\n[vite] Internal server error: Page /guide did not render in 30 seconds.\n at Timeout. (C:\\Users\\mbagi\\Developer\\xxx\\angular-client\\node_modules\\@angular\\build\\src\\utils\\server-rendering\\render-page.js:90:90)\n at Timeout.timer (c:/Users/mbagi/Developer/xxx/angular-client/node_modules/zone.js/fesm2015/zone-node.js:2320:21)\n at _ZoneDelegate.invokeTask (c:/Users/mbagi/Developer/xxx/angular-client/node_modules/zone.js/fesm2015/zone-node.js:459:13)\n at _ZoneImpl.runTask (c:/Users/mbagi/Developer/xxx/angular-client/node_modules/zone.js/fesm2015/zone-node.js:226:35)\n at invokeTask (c:/Users/mbagi/Developer/xxx/angular-client/node_modules/zone.js/fesm2015/zone-node.js:540:14)\n at Timeout.ZoneTask.invoke (c:/Users/mbagi/Developer/xxx/angular-client/node_modules/zone.js/fesm2015/zone-node.js:524:33)\n at Timeout.data.args. (c:/Users/mbagi/Developer/xxx/angular-client/node_modules/zone.js/fesm2015/zone-node.js:2301:23)\n at listOnTimeout (node:internal/timers:573:17)\n at process.processTimers (node: internal/timers:514:7\n\nThe component is used to render an image once every second, it takes into parameters the list of images\nimport { Component, OnInit, signal } from '@angular/core';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n template: `\n @if(img(); as img){\n \n }\n `,\n styles: '',\n})\nexport class AppComponent implements OnInit {\n imgs = ['https://cdn.pixabay.com/photo/2023/09/02/03/15/water-8228076_1280.jpg',\n 'https://media.istockphoto.com/id/157482223/de/foto/water-splash.jpg?s=2048x2048&w=is&k=20&c=tovlRmZEzpSmlXEL9OH8iANIK2w16YQD8QDDtsmxs3U=',\n 'https://media.istockphoto.com/id/157482222/de/foto/gefrorene-tropfen-wasser.jpg?s=2048x2048&w=is&k=20&c=ASd2SEWIz7EEiSQLeCdrf6zA-eR9ExAyFCzZLG1tXco='];\n\n img = signal('');\n intervalId:any;\n pointer=0;\n\n ngOnInit(): void {\n this.img.set(this.imgs[0]);\n this.startImageRotation();\n }\n\n startImageRotation(): void {\n this.intervalId = setInterval(() => {\n this.pointer = (this.pointer + 1) % this.imgs.length;\n this.img.set(this.imgs[this.pointer]);\n }, 1000);\n }\n}\n\nThe code is working if I load another page and I navigate to this page, if I try to refresh the page directly there is a TimeOut. However it always fails if I try to run ng build. 
This page doesn't let me build to the project\nHow to reproduce the issue in few steps with ANGULAR 18 :\nng new test\n\nUpdate the app.compenent.ts\nimport { Component, OnInit, signal } from '@angular/core';\n\n@Component({\n selector: 'app-root',\n standalone: true,\n template: `\n @if(img(); as img){\n \n }\n `,\n styles: '',\n})\nexport class AppComponent implements OnInit {\n imgs = `['https://cdn.pixabay.com/photo/2023/09/02/03/15/water-8228076_1280.jpg',\n 'https://media.istockphoto.com/id/157482223/de/foto/water-splash.jpg?s=2048x2048&w=is&k=20&c=tovlRmZEzpSmlXEL9OH8iANIK2w16YQD8QDDtsmxs3U=',\n 'https://media.istockphoto.com/id/157482222/de/foto/gefrorene-tropfen-wasser.jpg?s=2048x2048&w=is&k=20&c=ASd2SEWIz7EEiSQLeCdrf6zA-eR9ExAyFCzZLG1tXco=']`;\n\n img = signal('');\n intervalId:any;\n pointer=0;\n\n ngOnInit(): void {\n this.img.set(this.imgs[0]);\n this.startImageRotation();\n }\n\n startImageRotation(): void {\n this.intervalId = setInterval(() => {\n this.pointer = (this.pointer + 1) % this.imgs.length;\n this.img.set(this.imgs[this.pointer]);\n }, 1000);\n }\n}\n\nthen Serve\nng serve\n\nYou should have an error like that"} +{"id": "000111", "text": "I have tried to use the deploy url with localhost:4100 but this did not change the port of the final builded server.mjs file in the dist folder\nI would like to to have a different port than 4000 when i run the server.mjs file on a remote virtual machine"} +{"id": "000112", "text": "I upgraded to Angular 18 (and adjusted the theming styles to the Material 3 SCSS API), but I can't figure out how to define typography scale levels (font sizes) with the new API. It used to be done like this:\n$my-custom-typography-config: mat.m2-define-typography-config(\n $headline-1: mat.m2-define-typography-level(112px, 112px, 300, $letter-spacing: -0.05em),\n $headline-2: mat.m2-define-typography-level(56px, 56px, 400, $letter-spacing: -0.02em),\n $headline-3: mat.m2-define-typography-level(45px, 48px, 400, $letter-spacing: -0.005em),\n $headline-4: mat.m2-define-typography-level(34px, 40px, 400),\n $headline-5: mat.m2-define-typography-level(24px, 32px, 400),\n // ...\n);\n\nBut i can't find anything similar in the new theming docs. 
The best I've found is this: https://material.angular.io/guide/typography#type-scale-levels but it doesn't provide an example.\nHow can I do this?"} +{"id": "000113", "text": "I uploaded my project to angular 18 (also material v.18) and the styles of my palette theme have changed and I cannot deploy my project.\n@use 'SASS:map';\n@use '@angular/material' as mat;\n\n\n$md-primary: (\n 50 : #fee6fe,\n 100 : #fcbffd,\n 200 : #fa95fb,\n 300 : #f76bf9,\n 400 : #f64bf8,\n 500 : #f42bf7,\n 600 : #f326f6,\n 700 : #f120f5,\n 800 : #ef1af3,\n 900 : #ec10f1,\n A100 : #ffffff,\n A200 : #feebff,\n A400 : #fdb8ff,\n A700 : #fc9eff,\n contrast: (\n 50 : #000000,\n 100 : #000000,\n 200 : #000000,\n 300 : #000000,\n 400 : #000000,\n 500 : #ffffff,\n 600 : #ffffff,\n 700 : #ffffff,\n 800 : #ffffff,\n 900 : #ffffff,\n A100 : #000000,\n A200 : #000000,\n A400 : #000000,\n A700 : #000000,\n )\n);\n\n\n$md-secondary: (\n 50 : #f6e1ff,\n 100 : #eab3ff,\n 200 : #dc80ff,\n 300 : #cd4dff,\n 400 : #c327ff,\n 500 : #b801ff,\n 600 : #b101ff,\n 700 : #a801ff,\n 800 : #a001ff,\n 900 : #9100ff,\n A100 : #ffffff,\n A200 : #f9f2ff,\n A400 : #e0bfff,\n A700 : #d4a6ff,\n contrast: (\n 50 : #000000,\n 100 : #000000,\n 200 : #000000,\n 300 : #000000,\n 400 : #ffffff,\n 500 : #ffffff,\n 600 : #ffffff,\n 700 : #ffffff,\n 800 : #ffffff,\n 900 : #ffffff,\n A100 : #000000,\n A200 : #000000,\n A400 : #000000,\n A700 : #000000,\n )\n);\n\n//GLOBAL\n\n$my-primary: mat.m2-define-palette($md-primary, 500);\n$my-secondary: mat.m2-define-palette($md-secondary, 500);\n$my-theme: mat.m2-define-light-theme((\n color: (\n primary: $my-primary,\n accent: $my-secondary,\n )\n));\n@include mat.all-component-themes($my-theme);\n\n$color-config: mat.get-color-config($my-theme);\n$primary-palette: map.get($color-config, 'primary');\n$primary: mat.get-theme-color($primary-palette, 500);\n$secondary: mat.get-theme-color($accent-palette, 500);\n$light-secondary: mat.get-theme-color($accent-palette, 300);\n$light-primary: mat.get-theme-color($primary-palette, 300);\n\n//PALETTE BASICS\n$light-grey: rgb(228, 228, 228);\n$grey: #252525;\n$secondary-text: #525252;\n$black: rgb(20, 20, 20);\n\n\n:root {\n --primary: #{$primary};\n --secondary: #{$secondary};\n --light-secondary: #{$light-secondary};\n --light-primary: #{$light-primary};\n --light-grey: #{$light-grey};\n --grey: #{$grey};\n --secondary-text: #{$secondary-text};\n --black: #{$black};\n}\n\n\n\n\nI have tried changing the variables to m2-(example) but I get the error: 'Hue \"500\" does not exist in palette. Available hues are: 0, 10, 20, 25, 30, 35, 40, 50, 60, 70, 80, 90, 95, 98, 99, 100, secondary, neutral, neutral-variant, error'"} +{"id": "000114", "text": "I am upgrading my Angular 17 application to Angular 18 and want to migrate to the new application builder.\nI am using ng update @angular/core@18 @angular/cli@18 and opted in to the new application builder when I was asked. Next, I updated the angular.json file so that the browser build's location is using dist/project-x instead of dist/project-x/browser as suggested by the update process:\n\nThe output location of the browser build has been updated from dist/project-x to dist/project-x/browser. 
You might need to adjust your deployment pipeline or, as an alternative, set outputPath.browser to \"\" in order to maintain the previous functionality.\n\nHere is an extract of my angular.json file:\n{\n \"$schema\": \"./node_modules/@angular/cli/lib/config/schema.json\",\n \"version\": 1,\n \"newProjectRoot\": \"projects\",\n \"projects\": {\n \"project-x\": {\n // ...\n \"architect\": {\n \"build\": {\n \"builder\": \"@angular-devkit/build-angular:application\",\n \"options\": {\n \"outputPath\": {\n \"base\": \"dist/project-x\",\n \"browser\": \"\"\n },\n // ...\n },\n // ...\n \"configurations\": {\n // ...\n \"development\": {\n // ...\n \"outputPath\": {\n \"base\": \"dist/project-x\",\n \"browser\": \"\"\n }\n }\n // ...\n\nng build, ng build --configuration development and ng build --configuration production works as expected.\nHowever, when overriding the output path in the command line, then it does not work as expected.\nThe command below, will create a folder browser in /projects/project-x-backend/:\nng build --base-href=/x/ --output-path=/projects/project-x-backend/wwwroot \\\n --watch --configuration development --verbose\n\nHow can I get rid of the browser folder when using ng build --watch with a custom output path? (I would like to avoid setting the output path for the development configuration to /projects/project-x-backend/wwwroot in angular.json itself.)"} +{"id": "000115", "text": "What is the meaning of the error and how to fix it\n\nIf '' is an Angular component, then verify that it is part of this module.\nIf '' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA'\n\nError Message is :\nError: src/app/app.component.html:29:4 - error NG8001: 'router-outlet' is not a known element:\n1. If 'router-outlet' is an Angular component, then verify that it is part of this module.\n2. If 'router-outlet' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message.\n\n29 \n ~~~~~~~~~~~~~~~\n\n src/app/app.component.ts:7:16\n 7 templateUrl: './app.component.html',\n ~~~~~~~~~~~~~~~~~~~~~~\n Error occurs in the template of component AppComponent.\n\n\nError: src/app/app.component.html:31:4 - error NG8001: 'progress-bar' is not a known element:\n1. If 'progress-bar' is an Angular component, then verify that it is part of this module.\n2. 
If 'progress-bar' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message.\n\n31 \n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n src/app/app.component.ts:7:16\n 7 templateUrl: './app.component.html',\n ~~~~~~~~~~~~~~~~~~~~~~\n Error occurs in the template of component AppComponent.\n\n\nError: src/app/app.module.ts:26:5 - error NG6001: Cannot declare 'TableModule' in an NgModule as it's not a part of the current compilation.\n\n26 TableModule,\n ~~~~~~~~~~~\n\n node_modules/primeng/table/table.d.ts:1397:22\n 1397 export declare class TableModule {\n ~~~~~~~~~~~\n 'TableModule' is declared here.\n\napp.module.ts - Here I tried\n\nadding the schema but no luck.\nI have imported all the necessary packages\nImported formsmodule and reactiveformsmodule but no luck\n\nimport { NgModule } from '@angular/core';\nimport { BrowserModule } from '@angular/platform-browser';\nimport { CommonModule } from '@angular/common';\nimport { AppRoutingModule } from './app-routing.module';\nimport { AppComponent } from './app.component';\nimport { FormsModule, ReactiveFormsModule } from '@angular/forms';\nimport { ProgressBarComponent } from './_helpers/progress-bar/progress-bar.component';\nimport { CitationDashboardComponent } from './modules/citation-dashboard/citation-dashboard.component';\nimport { TableFilterPipe } from './pipes/tabular-filter.pipe';\nimport { GenericHttpService } from 'src/app/config/GenericHttp/generic-http.service'\nimport { NgSelectModule } from \"@ng-select/ng-select\";\nimport { HttpClientModule } from '@angular/common/http';\nimport { TableModule } from 'primeng/table'; \nimport { httpInterceptProviders } from './http-interceptors/auth-index';\nimport { HomePageComponent } from './modules/home-page/home-page.component';\n\n\n// import { NgModule, CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';\n// import { NO_ERRORS_SCHEMA,CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';\n\n@NgModule({\n declarations: [\n AppComponent,\n ProgressBarComponent,\n TableFilterPipe,\n TableModule,\n CitationDashboardComponent,\n HomePageComponent\n ],\n imports: [\n BrowserModule,\n FormsModule,\n ReactiveFormsModule,\n CommonModule,\n NgSelectModule,\n HttpClientModule,\n AppRoutingModule\n ],\n // schemas: [\n // CUSTOM_ELEMENTS_SCHEMA,\n // NO_ERRORS_SCHEMA\n // ],\n providers: [GenericHttpService, httpInterceptProviders],\n bootstrap: [AppComponent]\n})\nexport class AppModule { }\n\n\nI started facing this issue after installing primeng. Now I uninstalled primeng but the issue still persists. 
Attaching package.json for package version details\n{\n \"name\": \"ermapplications\",\n \"version\": \"0.0.0\",\n \"scripts\": {\n \"ng\": \"ng\",\n \"start\": \"ng serve\",\n \"build\": \"ng build\",\n \"watch\": \"ng build --watch --configuration development\",\n \"test\": \"ng test\"\n },\n \"private\": true,\n \"dependencies\": {\n \"@angular/animations\": \"^16.0.0\",\n \"@angular/common\": \"^16.0.0\",\n \"@angular/compiler\": \"^16.0.0\",\n \"@angular/core\": \"^16.0.0\",\n \"@angular/forms\": \"^16.0.0\",\n \"@angular/platform-browser\": \"^16.0.0\",\n \"@angular/platform-browser-dynamic\": \"^16.0.0\",\n \"@angular/router\": \"^16.0.0\",\n \"@ng-bootstrap/ng-bootstrap\": \"^15.0.0\",\n \"@ng-select/ng-option-highlight\": \"^11.1.1\",\n \"@ng-select/ng-select\": \"^11.0.0\",\n \"@types/file-saver\": \"^2.0.7\",\n \"bootstrap\": \"^5.2.0\",\n \"fontawesome\": \"^5.6.3\",\n \"hammerjs\": \"^2.0.8\",\n \"jquery\": \"^3.6.1\",\n \"polyfills\": \"^2.1.1\",\n \"popper.js\": \"^1.16.1\",\n \"primeicons\": \"^7.0.0\",\n \"primeng\": \"^16.9.1\",\n \"tslib\": \"^2.3.0\",\n \"zone.js\": \"~0.13.0\"\n },\n \"devDependencies\": {\n \"@angular-devkit/build-angular\": \"^16.0.0\",\n \"@angular/cli\": \"~16.0.0\",\n \"@angular/compiler-cli\": \"^16.0.0\",\n \"@types/jasmine\": \"~4.3.0\",\n \"@types/jquery\": \"^3.5.14\",\n \"@types/node\": \"^12.11.1\",\n \"jasmine-core\": \"~4.6.0\",\n \"karma\": \"~6.4.0\",\n \"karma-chrome-launcher\": \"~3.2.0\",\n \"karma-coverage\": \"~2.2.0\",\n \"karma-jasmine\": \"~5.1.0\",\n \"karma-jasmine-html-reporter\": \"~2.0.0\",\n \"rxjs\": \"^6.5.3\",\n \"typescript\": \"^4.9.3\",\n \"webpack-dev-server\": \"^4.15.1\"\n },\n \"description\": \"This project was generated with [Angular CLI](https://github.com/angular/angular-cli) version 16.0.0.\",\n \"main\": \"index.js\",\n \"keywords\": [],\n \"author\": \"\",\n \"license\": \"ISC\"\n}"} +{"id": "000116", "text": "I am using with angular 13\nhttps://github.com/bbc/slayer\n\nIts a commonjs lib and has no @types.I Managed to make it work with angular 13 (a while back) but now with the vite compiler I just dont know how.\nI added \"types\": [\"node\"] to tsconfig.json and tsconfig.app.json\nadd declared a type.d.ts\ndeclare module 'slayer';\nbut nothing works\nReferenceError: process is not defined\n at node_modules/slayer/node_modules/readable-stream/lib/_stream_writable.js\n\nHow can I import properly a commonjs module in my angluar 18 app ?\nThanks"} +{"id": "000117", "text": "Are there any differences or advantages or proper way to do this?\nLet's say I have an observable, I may receive it from backend call, a service or through a GUI event like scrolling event.\nI have a property in the template that depends on that observable. 
I am planning to provide the value to that property through a signal.\nSo I want to transfer the value to that signal through my observable, whenever it receives a value.\nI found two ways to provide value to a signal through an observable:-\n\nBy modifying the value of the signal inside the subscribe method of the observable.\nBy converting the observable to a signal and directly using that signal.\n\nMinimal Example Demonstrating Both Ways:-\nExample On Stackblitz\nimport { Component, EventEmitter, OnDestroy, OnInit, Signal, signal } from '@angular/core';\nimport { toSignal } from \"@angular/core/rxjs-interop\";\n\n@Component({\n selector: 'app-app',\n standalone: true,\n imports: [],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent implements OnInit {\n\n // First Way\n nonToSignalButtonClick$ = new EventEmitter();\n signalVar = signal(-1);\n\n // Second Way\n toSignalButtonClick$ = new EventEmitter();\n signalVarThroughToSignal = toSignal(this.toSignalButtonClick$, {initialValue: -1});\n\n ngOnInit() {\n // First Way\n this.nonToSignalButtonClick$.subscribe((v) => this.signalVar.update(initial => initial + v));\n }\n\n onDirectClick() {\n this.signalVar.update(initial => initial + 1);\n }\n\n // First Way\n onObservableClick() {\n this.nonToSignalButtonClick$.emit(1);\n }\n\n // Second Way\n onObservableToSignalClick() {\n this.toSignalButtonClick$.emit(this.signalVarThroughToSignal() + 1);\n }\n}\n\n

{{ signalVar() }}

\n\n\n\n
\n\n\n\n
\n\n

Through toSignal: {{ signalVarThroughToSignal() }}

\n\n\n\nAre there any advantages or differences between using one approach over the other? Thanks!"} +{"id": "000118", "text": "I have migrated my app to zoneless thanks to provideExperimentalZonelessChangeDetection() and having a mix of signals and Observables +AsyncPipe.\nDo I still need the OnPush ChangeDetection Strategy ?"} +{"id": "000119", "text": "The Angular team just implemented the new @let syntax in templates. According to this comment it's implemented in this commit, which should already be released in version 18.0.2\n\n\n\n\nI updated my NX workspace to use @angular/compiler 18.0.2 (npm update)\n\nHowever it's still not working. I'm still getting the following error:\nX [ERROR] NG5002: Incomplete block \"let showSpan\". If you meant to write the @ character, you should use the \"@\" HTML entity instead. [plugin angular-compiler]\n\n libs/example-ng-bootstrap/calendar/src/calendar.component.html:32:16:\n 32 \u2502 @let showSpan = (day !== null) && day.isInMonth;\n \u2575 ~~~~~~~~~~~~~~\n\n Error occurs in the template of component BsCalendarComponent.\n\n libs/example-ng-bootstrap/calendar/src/calendar.component.ts:14:15:\n 14 \u2502 templateUrl: './calendar.component.html',\n \u2575 ~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nSo what am I doing wrong?"} +{"id": "000120", "text": "The archivement is to use a base component where I would be able to render a default list of items or a customized one, to do so I've create a base component that has the mat-select-list in it like this:\n\n \n @for (item of items(); track $index) {\n {{ item.name }}\n } \n \n\n\nThe base mat-list-option has as the value the { id: item.id, name: item.name } and as the content {{ item.name }}, when I use this base component without passing any ng-content so by using the default one, I have no issues and the component behave as it has to.\nThe issue is when I try to use a custom in ng-content in a component like this:\n@Component({\n selector: 'app-product-select-modal',\n standalone: true,\n imports: [BaseSelectModalComponent, MatListOption],\n template: `\n \n @for (item of items(); track $index) {\n \n
{{ item.product.name }}
\n {{ item.key }}\n
\n }\n
\n `,\n})\nexport class LicenseSelectModalComponent {\n...\n}\n\nI just get NullInjectorError: NullInjectorError: No provider for InjectionToken SelectionList! error when the base-select-modal is trying to render.\nI've tried yet to import MatListModule even in the LicenseSelectModalComponent but it has no effect.\nHere is the base-select-modal:\n@Component({\n selector: 'app-base-select-modal',\n standalone: true,\n imports: [MatDialogModule, MatButton, MatFormField, MatInput, MatLabel, MatListModule, ReactiveFormsModule, MatProgressSpinnerModule],\n templateUrl: './base-select-modal.component.html',\n styleUrl: './base-select-modal.component.scss',\n changeDetection: ChangeDetectionStrategy.OnPush\n})\nexport class BaseSelectModalComponent {\n...\n}"} +{"id": "000121", "text": "I'm doing a project in Angular 18, I want to use the httpClientModule but it tells me that it is deprecated when I want to import it directly into a component. Likewise, when I want to import the httpClient within the imports of the same component it tells me component imports must be standalone components, directives, pipes, or must be NgModules.\nI was investigating and it said that the solution is to put the provideHttpClient() function within the providers of the app.module.ts file but in my case I don't have that file, I only have the app.config.ts and app.config.server.ts In which of both should I put it?\nThe content of both files are this:\n//app.config.ts\n\nimport { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';\nimport { provideRouter, withComponentInputBinding } from '@angular/router';\nimport { routes } from './app.routes';\nimport { provideClientHydration } from '@angular/platform-browser';\nimport { provideHttpClient } from '@angular/common/http';\n\nexport const appConfig: ApplicationConfig = {\n providers: [provideZoneChangeDetection({ eventCoalescing: true }), provideRouter(routes, withComponentInputBinding()), provideClientHydration()]\n}; \n\n//app.config.server.ts\nimport { mergeApplicationConfig, ApplicationConfig } from '@angular/core';\nimport { provideServerRendering } from '@angular/platform-server';\nimport { appConfig } from './app.config';\nimport { provideHttpClient } from '@angular/common/http';\n\nconst serverConfig: ApplicationConfig = {\n providers: [\n provideServerRendering(),\n ]\n};\n\nexport const config = mergeApplicationConfig(appConfig, serverConfig);"} +{"id": "000122", "text": "I'm read all the Amplify Gen 2 Documentation but I don't find how to list all registered users in application.\nIt's because need to create a admin page to list all users with his roles in Angular.\nI think that probably can do this with lambda functions or something like that but I don't find nothing about that.\nThanks for all!\nI'm read all the documentation: https://docs.amplify.aws/angular/build-a-backend/auth/connect-your-frontend/"} +{"id": "000123", "text": "Can't bind to 'ngModel' since it isn't a known property of 'textarea'\ncomponent.html\n\n\n

{{ newPost }}

\n\ncomponent.ts\nimport { Component } from '@angular/core';\n@Component({\n selector: 'app-post-create',\n standalone: true,\n templateUrl: './post-create.component.html',\n styleUrl: './post-create.component.css'\n})\nexport class PostCreateComponent {\nnewPost=\"NO content Here\";\nenteredValue='';\nonSavePost(){\n this.newPost = this.enteredValue;\n}\n}\n\napp.component.ts\nimport { Component } from '@angular/core';\nimport { RouterOutlet } from '@angular/router';\nimport { PostCreateComponent } from './post/post-create/post-create.component';\nimport { FormsModule } from '@angular/forms';\n@Component({\n selector: 'app-root',\n standalone: true,\n imports: [RouterOutlet,FormsModule,PostCreateComponent],\n templateUrl: './app.component.html',\n styleUrl: './app.component.css'\n})\nexport class AppComponent {\n title = 'Project_ng_demo';\n}\n\nhere is not app.module.ts file in agnular 18\nimport FormsModule in app.component.ts.\nimport { FormsModule } from '@angular/forms';"} +{"id": "000124", "text": "After upgrading my application to Angular 18.0.4, my test classes say:\n'HttpClientTestingModule' is deprecated. Add provideHttpClientTesting() to your providers instead.\nTherefore I adapted my code as follows:\n await TestBed.configureTestingModule(\n {\n imports: [\n AssetDetailsComponent,\n ],\n providers: [\n // replacement for HttpClientTestingModule:\n provideHttpClientTesting() \n ]\n })\n .compileComponents();\n\nHowever, when I run the tests, I get the following error:\nNullInjectorError: R3InjectorError(Standalone[AssetDetailsComponent])[InventoryActionService -> InventoryActionService -> _HttpClient -> _HttpClient]:\n NullInjectorError: No provider for _HttpClient!\n\nIf I use provideHttpClient() instead of provideHttpClientTesting() it works, yet I doubt that this is best practice. What is the correct solution to this issue?"} +{"id": "000125", "text": "I have a simple routing guard in my Angular application that uses \"@angular/ssr\": \"^18.0.5\" which check for the \"redirect\" query param.\nexport const redirectGuard: CanActivateFn = (_, state): boolean | UrlTree => {\n const hasRedirect = state.root.queryParams['redirect'];\n const router = inject(Router);\n if(!hasRedirect) {\n return router.createUrlTree(['/error']);\n }\n\n return true;\n};\n\nThe app.routes.ts:\nexport const APP_ROUTES: Routes = [\n {\n path: '',\n canActivate: [redirectGuard],\n loadComponent: () =>\n import('./features/login/login.component').then((m) => m.LoginComponent),\n },\n ...\n {\n path: 'error',\n loadComponent: () =>\n import('./features/error/error.component').then((m) => m.ErrorComponent),\n }\n]\n\nWhen I try to reach the application like: http://localhost:4000/?redirect=https:%2F%2FXXX.XXX-XXX.XXX.XXX%2F the page behave like this:\n\nThe error page is shown for a while and then the guard process and show the LoginComponent."} +{"id": "000126", "text": "I have an interceptor that adds the access token to every request. The only problem is, the only way I can get this access token is by using a function that returns a Promise:\n async getToken(): Promise {\nI HAVE to use this function to get the tokens. There is no other way. 
I have tried making the authInterceptor function async to no avail.\nSo how do I use this in a function based interceptor like so?\nimport { Inject, inject } from '@angular/core';\nimport { AuthService } from './auth.service';\nimport { HttpInterceptorFn } from '@angular/common/http';\n\nexport const authInterceptor: HttpInterceptorFn = (req, next) => {\n const authService = Inject(AuthService)\n const authToken = await authService.getToken(); // how to do this?? \n console.log(authToken);\n\n // Clone the request and add the authorization header\n const authReq = req.clone({\n setHeaders: {\n Authorization: `Bearer ${authToken}`\n }\n });\n\n // Pass the cloned request with the updated header to the next handler\n return next(authReq);\n};\n\nPlease help."} +{"id": "000127", "text": "It's any way to create component with cli to select the files of generate component? Or it can only set in the template in editors.\nSometimes, I need to create a simple component and i want to create a standalone component with one ts file,\nHowever, when i use ng g c --standalone AComponent, It will create html, scss, spec, ts four file;"} +{"id": "000128", "text": "i cannot import FormsModule in task.component.ts file under task folder .Due to unable to solve error as \"Can't bind to 'ngModel' since it isn't a known property of 'input'.ngtsc(-998002)\" .My code is\n\nin task.component.html file.Pls help to resolve this error."} +{"id": "000129", "text": "I need to use a programmatic stylebox overwrite, but for some reason it's not working. It says it's been overwritten, but continues to point to the default stylebox.\nHere is some test code:\nvar panel = PanelContainer.new()\nprint(panel.has_theme_stylebox_override(\"normal\"), panel.get_theme_stylebox(\"normal\", \"normal\"))\n \nvar stylebox = StyleBoxFlat.new()\nstylebox.bg_color = Color.from_string(\"#000000a0\", Color.BLACK)\nstylebox.corner_radius_bottom_left = 10\npanel.add_theme_stylebox_override(\"normal\", stylebox)\nprint(panel.has_theme_stylebox_override(\"normal\"), panel.get_theme_stylebox(\"normal\", \"normal\"))\n\nHere is the output:\nfalse\ntrue\n\nAs you can see, despite being overwritten the stylebox is still pointing to the default for the theme."} +{"id": "000130", "text": "So, I was following a tutorial for making my enemy move towards my player, and there comes a part of the script where its calls a \"Move_Slide\" so the enemy can move, but I get the error.\nHeres my full code:\nextends CharacterBody2D\n\n# Movement speed\nvar speed = 100 \nvar player_position\nvar target_position\n# Get a reference to the player. \n@onready var player = get_parent().get_node(\"Player\")\n \nfunc _physics_process(delta):\n \n # Set player_position to the position of the player node\n player_position = player.position\n # Calculate the target position\n target_position = (player_position - position).normalized()\n \n \n if position.distance_to(player_position) > 1:\n move_and_slide(target_position * speed)\n look_at(player_position)\n\nthe error is located at the part of the script \"move_and_slide(target_position * speed)\"\nI've looked on Stack Overflow for this problem and I found a answer but this is for if i call a motion witch I don't so that doesn't help me."} +{"id": "000131", "text": "I want to position one node to another, but they are under different parent nodes. How can I set one of the node's position to the other's, without interfering with the parent nodes' positions? 
Btw, the Node3D with the position I'm trying to find has a random position.\nI've tried Node3D.global_position() and Node3D.global_translate (AI told me to try it). I know how to set the global position, just not how to find it."} +{"id": "000132", "text": "Recently I installed Godot and decided to make a game, but there was a problem:\nI added an Area2D to the character for collision detection, but the Area2D sprite looks wrong because it is overlapping the ground, see:\nSee, it is overlapping\nPlease solve\nPlease solve it, it is very important\nFor @TheJalfireKnight\nHere You Go"} +{"id": "000133", "text": "I tried to make a project run in fullscreen in Godot 4. These are my settings:\n\nand this is my main scene:\n\nAs you can see, I want the shape to be in the middle of the screen. But when I play the game, this happens:\nWhy is the picture not in the center?\nAnd why, when I click on the green button (the resize button), does it resize and work perfectly fine? How do I fix this? I want it to work fine in fullscreen."} +{"id": "000134", "text": "How do I detect mouse clicks/mouse events in an Area2D's script? Do I use the func _process(delta) function? Let's say I have an Area2D called area. So what should the script be like? I want something like this:\nextends Area2D\n\n\nfunc _process(delta):\n if mouse_touching && mouse_left_down:\n print(\"clicked on object\")"} +{"id": "000135", "text": "Because my fullscreen Godot game will be available on many platforms (e.g. Mac, Windows), I have to make sure that the first time my game launches, it checks the OS and device it's on and sets the correct resolution for that device. How do I do this? I want something like this:\nfunc _ready():\n device = project.get_device()\n project_settings.set_setting(\"window_width\", device.width)\n project_settings.set_setting(\"window_height\", device.height)\n project_settings.save()\n\nNote that the code I wrote is not actual GDScript; it's just an example of what I want."} +{"id": "000136", "text": "This multiplayer RPG game, which I am making, has a lot of problems :). This time, the problem is that, when the host places down an item with my grid system, nothing appears under the PlayerBuildings node in the remote tab (in the editor, where you can see what is going on with the nodes while the program runs), and in game the item is visible only to the host; but when the client builds something, it does appear in the remote tab, yet again it is visible only to the client!!! 
Also i get this error\n\nget_node: Node not found:\"World/PlayerBuildings/@Camp1@3/MultiplayerSynchronizer\" process_simplify_path: Condition \"node == nullptr\" is true.\n\nHave anyone any idea??\n\nAnd here is my grid base code:\nextends Node2D\n\n@onready var camp_fire = preload(\"res://src/Scenes/camp.tscn\")\nvar tile_size =16\n\nenum {OBSTACTLE, COLLECTABLE,RESOURCE}\nvar grid_size = Vector2(160,160)\nvar grid = []\n\n\nfunc _ready():\n for x in range(grid_size.x):\n grid.append([])\n for y in range(grid_size.y):\n grid[x].append(null)\n var positions = []\n for i in range (50):\n var xcoor = (randi() % int(grid_size.x))\n var ycoor = (randi() % int(grid_size.y))\n var grid_pos = Vector2(xcoor,ycoor)\n if not grid_pos in positions:\n positions.append(grid_pos)\n\nfunc _input(event):\n if GameManager.enable:\n if event.is_action_pressed(\"LeftClick\"):\n var mouse_pos = get_global_mouse_position()\n var multiX = int(round(mouse_pos.x)/tile_size)\n var numX = multiX*tile_size\n var multiY = int(round(mouse_pos.y)/tile_size)\n var numY = multiY*tile_size\n var new_pos = Vector2(multiX, multiY)\n var new_camp_fire = camp_fire.instantiate()\n new_camp_fire.name = str(new_camp_fire.name + str(multiplayer.get_unique_id()) )\n new_camp_fire.set_position(tile_size*new_pos)\n grid[multiX][multiY] = OBSTACTLE\n get_tree().root.get_node(\"World\").get_node(\"PlayerBuildings\").add_child(new_camp_fire)"} +{"id": "000137", "text": "The following scripts job is to scan the given directory and find out how many files(or characters) are in it. Then it creates a number of buttons equal to the total number of files. Currently my problem is that i have no way knowing which button is pressed during run time.\nextends Control\n\n#number of characters to be imported\nvar characters = 0\n#file names of all avalible chararters\nvar files = []\n#buttons associated with said characters\nvar array = []\n\n# Called when the node enters the scene tree for the first time.\nfunc _ready():\n scan_directory()\n populate()\n setup()\n\n#scans for the number of characters in the folder\nfunc scan_directory():\n var current = \"\"\n var dir = DirAccess.open(\"res://Characters/\")\n dir.list_dir_begin()\n current = dir.get_next()\n while current != \"\":\n files.push_back(current)\n current = dir.get_next()\n print(files)\n characters = files.size()\n\nfunc setup():\n var x = 0\n for i in characters:\n var but = array[x]\n var path = \"res://Characters/\" + files[x] + \"/\"+ files[x] + \".gd\"\n var char = load(path)\n var dic = char.import()\n x += 1\n\n#creates buttons equal to the number of characters\nfunc populate():\n var x = 100\n var y = 100\n var texture = Texture2D.new()\n texture = load(\"res://img/button.jpg\")\n\n for i in characters:\n var but = Button.new()\n var path = load(\"res://Scripts/Character_Select.gd\")\n add_child(but)\n but.pressed.connect(self.export)\n but.set_position(Vector2(x,y))\n but.set_button_icon(texture)\n x += 250\n if x > 1250:\n x = 100\n y += 250\n array.push_back(but)\n print(array)\n\nfunc export():\n var temp = self.get_instance_id()\n print(\"Current id: \" + str(temp))\n\nSo im currently looking for a way to either have the button pass a varible when clicked or be able to know the instance id of the button that was clicked so i can compare it to know buttons in the array.\nAny help is appreciated."} +{"id": "000138", "text": "While trying to host my video game made with Godot 4 with Flask, I've ran into a problem-- the game doesn't load, and instead there is a spinning 
circle.\nThe error messages are as follows:\nGET http://127.0.0.1:5000/game/index.worker.js net::ERR_BLOCKED_BY_RESPONSE 200 (OK) (or 302 when not hard-reloading the page)\nUnchecked runtime.lastError: The message port closed before a response was received.\nand also a repeating set of errors:\nindex.js:14010 still waiting on run dependencies:\nonPrintError @ index.js:14010\n(anonymous) @ index.js:737\nsetInterval (async)\naddRunDependency @ index.js:727\ncreateWasm @ index.js:886\n(anonymous) @ index.js:12933\n(anonymous) @ index.js:14253\nPromise.then (async)\n(anonymous) @ index.js:14251\ndoInit @ index.js:14250\ninit @ index.js:14267\nstartGame @ index.js:14366\n(anonymous) @ (index):224\n(anonymous) @ (index):244\nindex.js:14010 dependency: wasm-instantiate\nonPrintError @ index.js:14010\n(anonymous) @ index.js:739\nsetInterval (async)\naddRunDependency @ index.js:727\ncreateWasm @ index.js:886\n(anonymous) @ index.js:12933\n(anonymous) @ index.js:14253\nPromise.then (async)\n(anonymous) @ index.js:14251\ndoInit @ index.js:14250\ninit @ index.js:14267\nstartGame @ index.js:14366\n(anonymous) @ (index):224\n(anonymous) @ (index):244\nindex.js:14010 (end of list)\n\nThe output from Flask looks like this:\n127.0.0.1 - - [22/Jun/2023 09:37:40] \"GET /game/index.FILE.EXTENSION HTTP/1.1\" 200 - where FILE and EXTENSION are exactly what they sound like.\nThe code for the website is as follows:\nfrom flask import Flask, send_from_directory\n\napp = Flask(__name__)\n\n@app.route('//')\ndef index(game):\n response = send_from_directory(f'./{ game }', 'index.html')\n response.headers.add('Cross-Origin-Opener-Policy', 'same-origin')\n response.headers.add('Cross-Origin-Embedder-Policy', 'require-corp')\n return response\n\n@app.route('//')\ndef game(game, file):\n return send_from_directory(f'./{ game }', file)\n\napp.run()\n\nand the filesystem looks like this:\n\n(it was exported as index.html, but is the same with a proper name)\nAlso, when just opening up the file as a static website, it just states that it needs Cross Origin Isolation, so I can't test it there. The build does run fine on itch.io\nThe Unchecked runtime.lastError message was caused by an extension according to this, so I opened up a different browser without extensions, and that was gone! The others were still there, and I can't find anything on how to fix this."} +{"id": "000139", "text": "Now, the official documentation sadly only reads:\n\nvoid draw_texture_rect_region ( Texture2D texture, Rect2 rect, Rect2 src_rect, Color modulate=Color(1, 1, 1, 1), bool transpose=false, bool clip_uv=true )\n\n\nDraws a textured rectangle region at a given position, optionally modulated by a color. If transpose is true, the texture will have its X and Y coordinates swapped.\n\nI would like to ask what the parameter Rect2 src_rect does.\nI am trying to draw a repeating texture into a CanvasItem (a button) and am trying to use the CanvasItem.draw_texture_rect_region() method to do so.\nSpecifically, my function looks like this:\n# size_in_cells is the amount of grid cells the object is supposed to use.\n\n# cell_size is the side length of the (square) cell\n\n# blockTexture is the CompressedTexture2D I am using as a texture. 
As it has a native resolution of \n# 64 by 64 pixels I need to scale it to match the cell_size.\n\nfunc _draw():\n for i in range(size_in_cells):\n draw_texture_rect_region(blockTexture, \\\n Rect2(Vector2((self.get_rect().size.x/size_in_cells)*i,0), Vector2(cell_size, cell_size)),\\\n self.get_rect())"} +{"id": "000140", "text": "In this documentation: https://docs.godotengine.org/en/stable/tutorials/shaders/shader_reference/shading_language.html#uniforms I see several examples of how to use hint but I don't know how to set a default value when I use a hint\nI have tried:\nuniform float mask_scale = 1.0 : hint_range(0.1, 10.0);\n\nBut I got error:\n\nExpected expression, found 'HINT_RANGE'"} +{"id": "000141", "text": "I'm making a 2.5D tower defense game with bloons in Godot.\n(All code and scenes are on my github). (Also I fixed my git so the repo is working now)\nMy Bloon scene consists of a Node3D and a Sprite3D for the Bloon, and when I instantiate the Bloon (in code) the Node3D and Sprite3D are in the scene tree, but it doesn't show the sprite.\nI tried changing the sprite after instantiating the scene, but still nothing. I also tried hiding all other objects, but still, nothing. J was expecting that the Bloon sprite would show at 0,0,0 but it didn't."} +{"id": "000142", "text": "So I have a game I'm making and I'm making a dash mechanic. I have a player class that the player nodes (that are character2D nodes) inherit from. I (mostly) made it so the player needs to have a dash bar filled up so they can dash. I need to have a signal that resets the players dash meter, and it will connect, but it won't run the function. (I'm using Godot 4.0)\nCharacter2D Script\nfunc _ready():\n player.remove_dash.connect(_remove_all_dash)\n print(\"is connected: \" + str(player.remove_dash.is_connected(_remove_all_dash)))\n \nfunc _remove_all_dash():\n dash_power = 0\n print(\"function ran\")\n\nPlayer class script\nfunc check_for_dash_input(delta, player_resource: Resource, dash_timer: Timer, input: String, dash_power: int):\n dash(delta, player_resource, dash_timer, dash_power)\n \n if Input.is_action_just_pressed(input):\n dash_timer.start()\n emit_signal(\"remove_dash\")\n print(\"signal emit\")\n\nDebug console:\nis connected: true;\nsignal emit\nI tried looking a the docs for Godot 4.0, but that didn't really help."} +{"id": "000143", "text": "I switched from Godot 3 to Godot 4 and was trying to make a platforming game.\nI need the export(Resource) function, but it no longer exists/functions in the new Godot.\nWhat's the equivalent of the function, but in Godot 4?\n(language is GDScript)"} +{"id": "000144", "text": "I'm creating a game using Godot Engine and I can't figure out how to create a wall to prevent a player from walking out of the screen.\nThe player is able to walk out of the screen window. How do I prevent it from happening?\nI tried to use the CollisionShape2D node but I don't think I know how to use it properly in a way that it works like I want it to."} +{"id": "000145", "text": "I'm currently making a game, and I want my character to shoot projectiles at the enemy. I'm facing an issue where when I click, the bullet doesn't fire. 
Here's my code so far:\npublic partial class PotatoCannon : Sprite2D\n{\n public Timer cooldown;\n\n [Export]\n public PackedScene boolet { get; set; }\n\n public override void _Ready()\n {\n cooldown = GetNode(\"CannonCooldown\");\n cooldown.Autostart = false;\n }\n public override void _PhysicsProcess(double delta)\n {\n GD.Print(cooldown.TimeLeft);\n LookAt(GetGlobalMousePosition());\n RotationDegrees = Math.Abs(RotationDegrees);\n if (RotationDegrees%360>=90.0 && RotationDegrees%360<=270.0)\n {\n FlipV = true;\n }\n else\n {\n FlipV = false;\n }\n if (Input.IsActionJustPressed(\"shoot\") && cooldown.TimeLeft == 0)\n {\n bullet Bullet = boolet.Instantiate();\n this.AddChild(Bullet);\n cooldown.WaitTime = 1;\n cooldown.Start();\n }\n }\n}\n\noh and btw, the error message is\nE 0:00:49:0157 PotatoCannon.cs:24 @ void PotatoCannon._PhysicsProcess(Double ): System.NullReferenceException: Object reference not set to an instance of an object."} +{"id": "000146", "text": "`extends Node2D\n\nconst SlotClass = preload(\"res://Inventory/Slot.gd\")\n@onready var inventory_slots = $GridContainer\nvar holding_item = null\n\nfunc _ready():\n for inv_slot in inventory_slots.get_children():\n inv_slot.connect(\"gui_input\" , self , \"slot_gui_input\", [inv_slot])\n \nfunc slot_gui_input(event: InputEvent, slot: SlotClass):\n if event is InputEventMouseButton:\n if event.button_index == MOUSE_BUTTON_LEFT & event.pressed:\n if holding_item != null:\n if !slot.item: # Place holding item to slot\n slot.putIntoSlot(holding_item)\n holding_item = null\n else: # Swap holding item with item in slot\n var temp_item = slot.item\n slot.PickFromSlot()`\n temp_item.global_position = event.global_position\n slot .putIntoSlot(holding_item)\n holding_item = temp_item\n elif slot.item:\n holding_item = slot.item\n slot.pickFromSlot()\n holding_item.global_position = get_global_mouse_position()\nfunc _input(event):\n if holding_item:\n holding_item.global_position = get_global_mouse_position()`\n\nI was following a tutorial for an inventory, the tutorial used Godot 3. whilst I use Godot 4.\nThis is the code that is giving the error ( < inv_slot.connect(\"gui_input\" , self , \"slot_gui_input\", [inv_slot]) >, is giving the error)\nError\n\nLine 9: Too many arguments for \"connect()\" call. Expected at most 3 but received 4.\nLine 9: Invalid argument for \"connect()\" function: argument 2 should be \"Callable\" but is \"res://Inventory/Item/Item.gd\".\nLine 9:Cannot pass a value of type \"String\" as \"int\".\nLine 9: Invalid argument for \"connect()\" function: argument 3 should be \"int\" but is \"String\"."} +{"id": "000147", "text": "I have exported the web version of my Godot 4, it is in a folder on my local drive.\nBecause of the SharedArrayBuffer dependency I can not just double-click in the index.html file. 
If I do so I see this error:\n\nError The following features required to run Godot projects on the Web\nare missing: Cross Origin Isolation - Check web server configuration\n(send correct headers) SharedArrayBuffer - Check web server\nconfiguration (send correct headers)\n\nHow can I run it in local?"} +{"id": "000148", "text": "in gdscript (using godot 4.1.1 stable) i am working on a function that adds a checkbox dynamically works fine\nbut when i try to get it to do something when it is toggled i get all sorts of errors this is my small function\nfunc _on_add_left_button_pressed():\n var checkbox = CheckBox.new()\n var text = houseNum.text\n leftList.add_child(checkbox)\n checkbox.text = text\n checkbox.connect(\"toggled\", self, \"_on_checkbox_toggled\")\n\ncurrently i have the \"_on_checkbox_toggled\" defined above but it dosent do anything at the moment just has a print statement in it\nthe checkbox.connect line gives me an error that says:\n\nInvalid argument for \"connect()\" function: argument 2 should be \"Callable\" but is \"res://main.gd\"\n\nany help would be much appreciated"} +{"id": "000149", "text": "I had the following code inside an all_tween_completed signal:\nfunc set_tween():\n if move_range == 0:\n return\n move_range *= -1\n var move_tween = get_tree().create_tween()\n move_tween.tween_property(self, \"position:x\",\n position.x + move_range,\n move_speed) \\\n .from(position.x) \\\n .set_trans(Tween.TRANS_QUAD) \\\n .set_ease(Tween.EASE_IN_OUT)\n\nSo that after everything moved I swap sign and move to the other side. However, in Godot 4 there is no such signal. I already changed the code to accommodate for the new syntax but I am not sure how can I reproduce the previous behaviour. Do I need to keep calling and emitting the finished signal from Tween?"} +{"id": "000150", "text": "I want to connect the pressed signal of 9 buttons by code but since the .connect method doesn\u2019t use a string argument (which I could easily modify in my loop) I don\u2019t know how to connect each of them.\nI want to do something like this:\nfor i in buttonArray:\n get_node(\"button%d\" %i).pressed.connect(\"on_pressed_button_%d\" %i)\n\nExpecting it will connect the 9 pressed signals to my 9 on_pressed_button function\nI found a post for an older Godot version that use only one \"on_pressed\" function for the whole list and get the button in the parameter.\nfor button in get_tree().get_nodes_in_group(\"my_buttons\"):\n button.connect(\"pressed\", self, \"_some_button_pressed\", [button])\n\nfunc _some_button_pressed(button):\n print(button.name)\n\nIs something like this still possible in the actual 4.1 version ?"} +{"id": "000151", "text": "I need to use this script written in godot 3 to work in godot 4\nlanguage is gdscript\nextends Path2D\n\nenum ANIMATION_TYPE {\n LOOP,\n BOUNCE,\n}\n\nexport(ANIMATION_TYPE) var animation_type\n\nonready var animationPlayer: = $AnimationPlayer\n\nfunc _ready():\n match animation_type:\n ANIMATION_TYPE.LOOP: animationPlayer.play(\"MoveAlongPathLoop\")\n ANIMATION_TYPE.BOUNCE: animationPlayer.play(\"MoveAlongPathBounce\")"} +{"id": "000152", "text": "I exported the godot project to HTML5 and after uploading to itch.io I got the error Error The following features required to run Godot projects on the Web are missing: Cross Origin Isolation - Check web server configuration (send correct headers) SharedArrayBuffer - Check web server configuration (send correct headers)\nI tried to export the project with different settings but still got 1 result."} 
+{"id": "000153", "text": "I am just starting out Godot and I need a little help with some platforming code written in Godot 4\nHow do I set a default value to this ->\n@export var speed:int: set = set_speed\nThe language being used in the code is gdscript"} +{"id": "000154", "text": "I am working through the \"Squash the Creeps\" 3D game tutorial in Godot and I have found that when the player character uses the move_and_slide() method in order to initiate a landing movement on top of an enemy collision mesh that it sometimes increases the score by more than 1 even if landing on only a single enemy. It appears from the debugger that calling get_slide_collision_count() as suggested in the tutorial is indicating that there was more than one collision in some instances, and iterating over the indices of the collision array (KinematicCollision3D) shows that there are multiple collisions with the same colliding object (collision.get_collider()) at the same position (collision.get_position())\nMy questions is whether or not this is expected behavior? I can understand that the \"move_and_slide()\" method for character bodies is intended to be executed to take care of multiple subsequent collisions to ensure flush movement, but is it expected that it would lead to multiple reported collisions with the same object in the same single call to move_and_slide()?\nIf so, I would imagine it is standard practice when using the collisions reported by move_and_slide() that the developer programs around that and ensures that when iterating over the collisions that any subsequent collisions in the array with that instance are ignored?\nIf not, what am I doing wrong here to generate more than one collision with the same instance? It seems like this would be something that would be noticed by SOMEONE else who is running through this tutorial but a lot of searching has left me empty handed.\n(Godot 4.1.1 btw)\nThe following is called in the _physics_process(delta) method:\nfor index in range(get_slide_collision_count()):\n var collision = get_slide_collision(index)\n var collider = collision.get_collider()\n var collision_position = collision.get_position()\n if (collider == null):\n continue\n if (collider.is_in_group(\"mob\")):\n if Vector3.UP.dot(collision.get_normal()) > 0.1:\n collider.squash() # This increments the score\n target_velocity.y = bounce_impulse\n\ncollider.squash is being called sometimes 2-3 times instead of once even if only one enemy is squashed in terms of how many enemies the player is actually landing on. Debugger shows more than one collision with the same collider at the same collision_position in a single call to get_slide_collision(). 
I would expect every collision in the KinematicCollision3D object to be with a unique collider instance, not multiple times with the same object instance."} +{"id": "000155", "text": "I am using the ConfigFile to save all the game related data into the user system.\nThere are two methods, one for saving the data and other one for getting the data.\nFunction to save the data:\nfunc _set_data(filename: String, section: String, field: String, value):\n var config_file = ConfigFile.new()\n \n config_file.set_value(section, field, value)\n \n config_file.save(filename)\n\nFunction to get the data:\nfunc _get_data(filename: String, section: String, field: String, default_value):\n var config_file = ConfigFile.new()\n\n config_file.load(filename)\n \n var result = config_file.get_value(section, field)\n\n if result == null:\n config_file.set_value(section, field, default_value)\n \n result = default_value\n \n config_file.save(filename)\n \n return result\n\nAbstract functions to use these private functions outside the file indirectly:\nfunc set_world(value: int):\n _set_data(app_config_file, world_section, world, value)\n\nfunc get_world():\n return _get_data(app_config_file, world_section, world, 1)\n\nfunc set_level(value: int):\n _set_data(app_config_file, level_section, level, value)\n\nfunc get_level():\n return _get_data(app_config_file, level_section, level, 1)\n\n//config file and section are basically strings\n\nThere is no issue in using these alone(for single data), but having issue in saving multiple sections at once!\nFor eg:\n set_level(2)\n \n set_world(4)\n \n print(get_level())\n print(get_world())\n\nExpected output:\n2\n4\n\nActual output:\n1\n4\n\nSame if I am calling save_world before and save_level after.\nResearch done:\nWhenever I am calling another function which saved other data, then it is removing the data related to the first 'set' function called."} +{"id": "000156", "text": "I was trying to make a set of Area2Ds that would display a value when the mouse is hovering over them. They're children of a control node (technically children of buttons in that node, the exact order is Control > Panel > Panel > Vbox > Hbox > Button > Area2D, yes the two panels are not a typo, no I'm not good at UI design). The control node has it's own scene where I edit it, and when I run just that scene everything works fine. The issue arises when I bring it into my main scene, everything loads correctly, but the Area2Ds no longer work. I have absolutely no idea why this is.\nI've checked that the collision layers of the Area2Ds and the mouse tracker aren't somehow changed, I've confirmed that the Area2Ds aren't being deleted or moved out of place or having their collision shapes changed, I've tried changing the Mouse Filter setting in the control node since I saw someone with a (seemingly) similar issue who was able to solve it that way. 
No luck\nAny help on figuring this out would be massively appreciated, thanks.\nAlternatively, if there's a way for buttons to send a signal when the mouse is hovering over them that would also be useful."} +{"id": "000157", "text": "Godot doesn't support static signals, so I tried two approaches:\nEmpty Signal:\nstatic var my_signal: Signal = Signal()\n\nCustom type:\nstatic var my_signal: StaticSignal = StaticSignal.new()\n\nclass_name StaticSignal\n\nvar _callables: Dictionary\n\nfunc connect(callable: Callable) -> void:\n self._callables[callable] = true\n\nfunc disconnect(callable: Callable) -> void:\n self._callables.erase(callable)\n\nfunc emit(data: Variant) -> void:\n for callable in self._callables:\n callable(data)\n\nThe problem is when it comes to .connecting to the signal. With StaticSignal I am getting a .connect member not found error. With Signal I am getting an ignored runtime error.\nstatic func _static_init():\n ChatLog.new_message.connect(func(msg):\n if len(_log) >= LIMIT:\n _log.remove_at(0)\n _log.append(msg))"} +{"id": "000158", "text": "I have a area node in godot4 which can scale up and down and also rotate. there are celling, floor and wall in scene, my area node place between them and can scale up, when this area got bigger can collide with other areas like floor, wall or celling. I want to know that area i collided is wall, floor or celling.\nI know there is characterbody node which has method that shows node which is collided is wall, floor or ceiling. but I want to use area node for both of them. how can I find node which is collided is wall, celling or floor?\nThanks in advance.\nthis is my code:\nfunc my_raycast(myray:RayCast3D,myarea:Area3D):\nmyray.global_position = center_pivot.global_position\nmyray.enabled = true\nmyray.target_position = myarea.global_position - myray.global_position\nif(myray.is_colliding()):\n if(myray.get_collider().name==myarea.name):\n #myray.enabled = false\n print(\"colided\")\n \n var mynormal = myray.get_collision_normal()\n print(myray.get_collision_normal())\n \n if(mynormal.is_zero_approx()):\n mynormal = -myray.global_position.direction_to(myray.to_global(myray.target_position))\n \n var is_floor:bool = mynormal.angle_to(myarea.transform.basis.y/myarea.scale.y) <= PI/4\n var is_ceiling:bool = mynormal.angle_to(myarea.transform.basis.y/myarea.scale.y) <= PI/4\n var is_wall:bool = mynormal.angle_to(myarea.transform.basis.y/myarea.scale.y) <= PI/4\n\n===============\nnever condition of is_zero_approx got true.\nI used raycast for finding normal of node I collided but I don't have any idea, how to find answer? I don't want to use layer or tags on wall, floor and celling @Theorat"} +{"id": "000159", "text": "I have a scene that consists of one parent and several children, which are all Sprite2Ds, like \"Body\", \"Clothes\", \"Armor\", etc. I want to be able to refer to the parent as if it was a single sprite that looks like the composition of all the children sprites. So, for instance, I would like to be able to set the flip_h on the parent and have that flip all of the children. I would like to be able to refer to the texture of the parent, and have that be the image that the scene looks like, i.e., all of the sprites layered on top of each other.\nI was expecting this to work just by making the parent a \"Sprite2D\" and adding the child sprites, but then the properties just refer to the empty texture of the parent sprite. 
In other posts I have seen people recommending stitching the images together in GIMP or something, but that is not going to work for me--I need control over all the behavior of all the individual layers of the overall sprite, but I want to be able to refer to the resulting sprite as if it was a single image.\nI have also seen recommendations to add a camera/viewport and grab the displayed image that way, but I don't think this is a good idea in my case because I will have hundreds of these scenes and I want it to be as performant as possible."} +{"id": "000160", "text": "(new here) Following a tutorial, this is a platformer game and they made a system where the level.gd can work on all levels although that doesn't work in my case the player doesn't collide with deathzone or ending collision shapes. so it doesn't emit the signals and basically, my game becomes unplayable, I thought it was an engine error so I made this project in Godot 4.0.3 but even now loading in 4.1.1 the problem is the same maybe its something as simple as doing something from inspector tab but I don't know\nheres my level.gd\n# same script is used on all my lvls \nextends Node2D\n\n@export var next_level: PackedScene = null\n\n@onready var start = $Start\n@onready var exit = $Exit\n@onready var deathzone = $deathzone\nvar player = null\n\nfunc _ready():\n \n player = get_tree().get_first_node_in_group(\"player\")\n if player!=null:\n player.global_position = start.get_spawn_pos()\n var traps = get_tree().get_nodes_in_group(\"traps\")\n for trap in traps:\n # trap.connect(\"touched_player\", _on_trap_touched_player)\n trap.touched_player.connect(_on_trap_touched_player)\n \n exit.body_entered.connect(on_exit_body_entered)\n# deathzone.body_entered.connect(_on_deathzone_body_entered)\n# deathzone.connect(\"body_entered\", _on_deathzone_body_entered)\n deathzone.body_entered.connect(_on_deathzone_body_entered)\n\nfunc _process(delta):\n if Input.is_action_just_pressed(\"quit\"):\n get_tree().quit()\n elif Input.is_action_just_pressed(\"reset\"):\n get_tree().reload_current_scene()\n\n\nfunc _on_trap_touched_player():\n reset_player()\n\nfunc _on_deathzone_body_enetered():\n reset_player()\n\nfunc reset_player():\n player.velocity = Vector2.ZERO\n player.global_position = start.get_spawn_pos()\n\nfunc on_exit_body_entered(body):\n if next_level != null:\n if body is Player:\n exit.animate()\n player.active = false\n await get_tree().create_timer(1.5).timeout\n get_tree().change_scene_to_packed(next_level)\n\nfunc _on_deathzone_body_entered(body):\n reset_player()\n\n\nmain LVL NODE OREDR\nLVL2 Node order\nI did try to reloading project or making the simple death zone scene again but its always like from which lvl I create the deathzone it works, so instead of saving brach as a scene I created its scene separately so now it works in other lvls but doesn't work in main lvl on other hand the exit part only work for main scene the other spike trap and saw traps are not working on other scenes as well\none thing that I noticed that all the nodes that are not working are all area2d or have area 2d inside their scene (obv) since it looks like a collision problem\nany info on this would be appreciated"} +{"id": "000161", "text": "I'm currently following this tutorial / resource: https://github.com/amzker/Gsheet_Godot\nThis outputs data sent from a Godot program to a sheet that looks like 
this,\n\n\n\n\nGreekAlphabet\nWhatHungryDogDo\nBananasGoodFor\nHowManyWords\n\n\n\n\nAlpha\nThe\nBananas\nThese\n\n\nBeta\nHungry\nAre\nAre\n\n\nDelta\nDog\nNice\nFour\n\n\nGamma\nEats\nSnacks\nWords\n\n\n\n\nMy goal is to upload data to a Google Sheet from Godot in a format that resembles this,\n\n\n\n\n\n\n\n\n\n\n\n\n\nGreekAlphabet\nAlpha\nBeta\nDelta\nGamma\n\n\nWhatHungryDogDo\nThe\nHungry\nDog\nEats\n\n\nBananasGoodFor\nBananas\nAre\nNice\nSnacks\n\n\nHowManyWords\nThese\nAre\nFour\nWords\n\n\n\n\nThe relevant apps script code is as follows:\nfunction json(sheetName) {\n const spreadsheet = SpreadsheetApp.openById(\"1h_KlXz9IWt2MtQQWYUSk4FJbIr02MbfXWU3ZRqY7U3I\") //CHANGE WITH YOUR SHEET ID ( see url of you sheet d/)\n const sheet = spreadsheet.getSheetByName(sheetName)\n const data = sheet.getDataRange().getValues()\n const jsonData = convertToJson(data)\n return ContentService\n .createTextOutput(JSON.stringify(jsonData))\n .setMimeType(ContentService.MimeType.JSON)\n}\nfunction convertToJson(data) {\n const headers = data[0]\n const raw_data = data.slice(1,)\n let json = []\n raw_data.forEach(d => {\n let object = {}\n for (let i = 0; i < headers.length; i++) {\n object[headers[i]] = d[i]\n }\n json.push(object)\n });\n return json\n}\nfunction doGet(params) {\n const sheetname = params.parameter.sheetname\n return json(sheetname)\n}\n\nfunction doPost(params) {\n const datee = params.parameter.date\n const timee = params.parameter.time\n const catee = params.parameter.cate\n const amounte = params.parameter.amount\n const desce = params.parameter.desc\n const sheetname = params.parameter.sheetname\n\n \n if(typeof params !== 'undefined')\n Logger.log(params.parameter);\n\n var ss = SpreadsheetApp.openById(\"1h_KlXz9IWt2MtQQWYUSk4FJbIr02MbfXWU3ZRqY7U3I\") //CHANGE WITH YOUR SHEET ID ( see url of you sheet d/)\n var sheet = ss.getSheetByName(sheetname)\n var Rowtoenter = sheet.getLastRow()+1\n sheet.appendRow([datee,timee,catee,amounte,desce])\n \n\n/* \n var datecol = sheet.getRange(Rowtoenter,1)\n var timecol = sheet.getRange(Rowtoenter,2)\n var catecol = sheet.getRange(Rowtoenter,3)\n var amountcol = sheet.getRange(Rowtoenter,4)\n var descol = sheet.getRange(Rowtoenter,5)\n \n datecol.setValue(datee)\n timecol.setValue(timee)\n catecol.setValue(catee)\n amountcol.setValue(amounte)\n descol.setValue(desce)\n*/\n \n}\n\nWhen the data leaves Godot, it's in the form of a URL with a string of parameters appended.\n\ndate=TODAY&time=NOW&cate=CATE&amount=AMOUNT&desc=DESC&flibble=SPONDS&sheetname=Sheet1\n\nOnce it reaches Sheets, it's converted into the first kind of table, where I really want it to appear as the second kind of table. =/\nI'm trying to wade through the source and read through the Docs (https://developers.google.com/apps-script/reference/spreadsheet/sheet) but I'm very new to working with JSON and I'm finding it hard to get a starting point.\nAny thoughts appreciated!\n(https://i.sstatic.net/EY0Yl.png)\n(https://i.sstatic.net/kASx9.png)"} +{"id": "000162", "text": "Consider a control that behaves as a decorator to user provided controls, like a window frame. 
I want my control to have all the common logic of the window (title bar, draggable window borders, buttons to hide the window, etc) and any time it's instanced in the main scene I want it to \"eat\" any of its node children and place them into a container of my choice.\nThis is the control I made, and the LinesContainer container is where I want any of its children to reside:\n \nAnd just to be absolutely clear what I mean, when it's instantiated into a scene as below, I want its children (the label, in this case) to behave as if they were children of the LinesContainer node instead:\n\nIf you are familiar with .Net XAML at all, this is what the ContentPresenter tag does in a control, it \"eats\" the Content property of the entire control (ie, the children of the control instance, as above) and displays it inside that tag, allowing me to create anything I need around it (or behind it, or over it, etc).\nIs there anything built-in like ContentPresenter? Or if not, how would I go about making something of my own? If possible, that also works correctly in the editor, allowing me to add and remove items as I need and have them layout correctly."} +{"id": "000163", "text": "My buttons are displayed (show method) after a short pause, and if I hover the mouse cursor over the position where the button will be and leave it, the button will not realize that it is hovered. This is fixed when moving the mouse, but I really want to fix this little problem.\nI used base signals mouse_entered() and mouse_exited()\nAfter show() buttons must understand is mouse in or not. That's all("} +{"id": "000164", "text": "I am making a third person game in godot4, and my third person camera has been flipping around when I drag my mouse up. How do I make it so that it sticks there and not allow it to flip over like other third person games?\nMy code right now (for player movement and camera):\nextends RigidBody3D\n\n@onready var horizontal_pivit = $HorizontalPivit\n@onready var vertical_pivit = $HorizontalPivit/VerticalPivit\n\nvar mouse_sensitivity := 0.001\nvar horizontal_input := 0.0\nvar vertical_input := 0.0\n\nfunc _ready() -> void:\n Input.set_mouse_mode(Input.MOUSE_MODE_CAPTURED)\n\n\nfunc _process(delta) -> void:\n var input := Vector3.ZERO\n input.x = Input.get_axis(\"left\", \"right\")\n input.z = Input.get_axis(\"forward\", \"backward\")\n \n apply_central_force(input * 1200.0 * delta)\n \n horizontal_pivit.rotate_y(horizontal_input)\n vertical_pivit.rotate_x(vertical_input)\n\n\n\nfunc _unhandled_input(event: InputEvent) -> void:\n if event is InputEventMouseMotion:\n if Input.get_mouse_mode() == Input.MOUSE_MODE_CAPTURED:\n horizontal_input = -event.relative.x * mouse_sensitivity\n vertical_input = -event.relative.y * mouse_sensitivity\n\nmy player scene:"} +{"id": "000165", "text": "I wrote Python code on a Raspberry Pi that sends data when the sound sensor is triggered. when I connect to it using Python script everything works fine but for some reason Godot 4 doesn't recognize buttons pressed through t keyboard.press_and_release() so I thought about making the connection directly in Godot's gdscript. For some reason, the status is always connecting even when the serverside says it is connected.\nI'm using WebSocketPeer in GDScript to connect to the URL. 
if there are any alternatives I would love some advice or maybe just a way to register the responses received by the py script as Godot input\nSolution\ni created a global file that connects to the pi server when the game starts:\nextends Node\n\nvar players = []\nvar scene\nconst port = 8880\nvar ip = \"192.168.4.1\"\nvar connection \n\nfunc _ready():\n connection = StreamPeerTCP.new()\n connection.connect_to_host( ip,port )\n\nthen I just use the responses and simulate buttons where I need:\nfunc _process(delta):\n GameData.connection.poll()\n \n var state = GameData.connection.get_status()\n \n if GameData.connection and state == GameData.connection.STATUS_CONNECTED:\n if GameData.connection.get_available_bytes() > 0:\n var s = GameData.connection.get_utf8_string(GameData.connection.get_available_bytes())\n print(str(s))\n if str(s) == \"1\":\n Input.action_press(\"1\")\n elif str(s) == \" \":\n Input.action_press(\"space\")"} +{"id": "000166", "text": "I have been following this tutorial on creating a Character Movement System where I ran into a problem related to signals. I was trying to have Spikes Node2D with an Area2D child send a signal when a body entered its CollisionShape2D. I was successfully able to connect the signal to the Spikes Node2D and I Ctrl + C the receiver method then added it to Spike Script.\nprivate void _on_area_2d_body_entered()\n{\n GD.Print(\"Body has entered\");\n}\n\nOn VS, it says that there is 1 reference to the receiver method, though I cant seem to access the code where it is called. That being said, whenever my Player CharacterBody2D passes through the Spikes' Sprite2D, which is perfectly aligned with the CollisionShape2D, I get the following debug error:\nemit_signalp: Error calling from signal 'body_entered' to callable:\n'Node2D(SpikeTrap.cs)::_on_area_2d_body_entered': Method not found.\n\nI followed the tutorial, and the guy added object body as parameters in the method. When I did the same, I got no references to my method. I have no clue what is going wrong here.\nI watched that part in the video multiple times, and I searched through google and the Godot documentation to no avail."} +{"id": "000167", "text": "i make loop, but it stop main loop. Help plz\nextends Area2D\n\n\n@onready var sprite = $Sprite\n@onready var audio = $Audio\n@onready var body = $Body\nvar rock = false\n\n\nfunc disable_stone(player, stone_thread):\n stone_thread.start(await disable(player, stone_thread))\n\nfunc disable(player, stone_thread):\n if !rock:\n print(\"super\")\n rock = true\n await get_tree().create_timer(0.5).timeout\n body.colbox.disabled = true\n sprite.modulate.a8 = 100\n audio.play()\n await get_tree().create_timer(2).timeout\n while !(player in get_overlapping_areas()): pass\n body.colbox.disabled = false\n sprite.modulate.a8 = 255\n rock = false\n stone_thread.wait_to_finish()\n\n\nI spawn thread. I dont know how fix that. I trying all thats i know."} +{"id": "000168", "text": "For a game I'm working on, I wanted to make a ring of cannons, and attach to these cannons a node that controls their logic (e.g, what cannons shoot when, what they fire, etc). 
I want this node to be extremely flexible, so that all I have to do is write a new function for the logic in some sub node, and then when the subnode is ready it will emit a signal, passing a list of all of the functions created up to the central node, which will then decide which function to use based on properties, user input etc.\nIn order to ensure correctness when creating each function, I was hoping to create some sort of \"prototype\", which basically dictates that all functions in a node needs to follow a specific set of rules, e.g, they must all take an arg of a list of Cannon objects, and an index, and all must return a list of Cannon objects. I was wondering if there is anything within the Godot framework that would allow for this. Effectively an interface, but for multiple functions within a node.\nAn interface would also allow me to more easily call references to the function in the parent node, since I don't have to worry about ensuring the correctness of my calls.\nThe closest thing I have found to this would be using a class with nodes, but that requires creating a new node for every function, which sounds messier than just having one central script that contains them all."} +{"id": "000169", "text": "I want to fill my array with loaded scenes of rooms in godot 4.1.1. So i created func to check whether they exist or not. If they exist then append to array, if not exist then break the loop. So when i run script i get error in debugger like this scene does not exist and failed to loading. Is it good? or there are any other ways to check whether scene exist or not? Im new to godot\nfunc get_room_array():\n var i = 1\n while true:\n if load(get_room_path(i)) != null:\n room_array.append(load(get_room_path(i)))\n else:\n break\n i+=1"} +{"id": "000170", "text": "Within Godot, I am attempting to create a function that would basically dynamically create a property list based on the children nodes (to allow maximum composition).\nCurrently though, I am stuck on one single issue. When I updated the _get_property_list, I don't want to have to hard code a variable to store an int for an enum for every single node. Instead, I want the variables for each node to be created dynamically, i.e., I don't know what variables I will be using within the class until I run the editor.\nI tried making a dictionary that will hold all of the variables, but when I overrided the _get and _set funcs as follows:\nfunc _get(key):\nreturn propertyHolder[key]\nfunc _set(key, value):\npropertyHolder[key] = value\nI got never ending errors, which tells me that I've gotten something wrong. I couldn't find any _get_variable_list funcs, like the _get_method_list, and so I am stuck. I also considered each node holding their own variable locally and passing it to the parent, but I ultimately ran into the same issue that since the number of nodes is dynamic, I somehow need to dynamically receive these nodes in the parent class.\nAny advice would be much appreciated."} +{"id": "000171", "text": "Following this tutorial (its a bit back in Godot 3) and I have run into some issues with PhysicasRayQueryParameters2D. 
Here is the code from the video:\nif (ableToShoot)\n{\n var spaceState = GetWorld2D().DirectSpaceState;\n Godot.Collections.Dictionary result = spaceState.IntersectRay(this.Position, player.Position, new Godot.Collections.Array {this} );\n\n if (result != null)\n {\n if (result.Contains(\"collider\"))\n {\n if (result[\"collider\"] == player)\n {\n GD.Print(\"Shooting\");\n ableToShoot = false;\n shootTimer = shootTimerReset;\n }\n }\n }\n}\n\n\nAnd here is my code:\nif (ableToShoot)\n{\n var spaceState = GetWorld2D().DirectSpaceState;\n //Change 1\n var query = PhysicsRayQueryParameters2D.Create(this.Position, player.Position);\n query.Exclude = new Godot.Collections.Array { };\n query.CollisionMask = 1;\n Godot.Collections.Dictionary result = spaceState.IntersectRay(query);\n\n if (result != null)\n {\n if (result.ContainsKey(\"collider\")) //Change 2\n {\n if (result[\"collider\"] == player) //Error Message\n {\n GD.Print(\"Shooting\");\n ableToShoot = false;\n shootTimer = shootTimerReset;\n }\n }\n }\n\nThe code from the video is obviously different due to the changes form Godot 3 to 4. However, I am not entirely sure if the corrections/changes I have made are the correct ones. I am still new to both C# and Godot. I marked the two places I made changes so that it is easier to compare. My main issue however is that I keep getting an error message where it says result[\"collider\"] == player.\nThat is how it was in the tutorial, and I did it myself but I get an error when I do, saying that I cannot use the '==' opperators when applyed to a Variant and PlayerController(a class from a different script). I tried casting result into a PlayerController, but once the tile's started blocking between me and the enemy I started getting error messages saying that I cannot convert Godot.TileMap to Godot.PlayerController. What should I do instead? And are there any other mistakes I have made when translating the code?\nI looked over many reddit forms and the Godot documentation but I could not get over that last error."} +{"id": "000172", "text": "Hey I am trying to make a 2D platformer with sprite animations, but I keep getting this error \"Invalid get index 'flip'(on base: 'AnimatedSprite2D')\". This is my code, does anyone have a idea of what im doing wrong?\nI was expecting the player to move to the left without the game chrasing.\nextends CharacterBody2D\n\nconst SPEED = 300.0\nconst JUMP_VELOCITY = -400.0\n\nvar gravity = ProjectSettings.get_setting(\"physics/2d/default_gravity\")\n@onready var anim = get_node(\"AnimationPlayer\")\n\nfunc _physics_process(delta):\nif not is_on_floor():\nvelocity.y += gravity * delta\n\nif Input.is_action_just_pressed(\"ui_accept\") and is_on_floor():\n velocity.y = JUMP_VELOCITY\n anim.play(\"Jump\")\n\nvar direction = Input.get_axis(\"ui_left\", \"ui_right\")\nprint(direction)\nif direction == -1:\n get_node(\"AnimatedSprite2D\").flip.h = true\nelif direction == 1:\n get_node(\"AnimatedSprite2D\").flip.h = false\n \nif direction:\n velocity.x = direction * SPEED\n if velocity.y == 0:\n anim.play(\"Run\")\n \n velocity.x = move_toward(velocity.x, 0, SPEED)\n if velocity.y == 0:\n anim.play(\"Idle\")\n \n if velocity.y > 0:\n anim.play(\"Fall\")\n\nmove_and_slide()"} +{"id": "000173", "text": "I am using Godot 4.1. 
I want to add a property via a script export to my node3d that has translation and rotation tools in the inspector, exactly like a Node3D's existing Transform property.\nThis is what I want to achieve:\n\nPrecisely, I want an X, Y, Z position field, followed by a X, Y, Z Rotation field (ideally with the little sliders too). I would not mind also having the Scale, Rotation Edit Mode, etc either, if it makes the solution easy to achieve.\nI have tried adding a Transform3D, but that exposes a transformation matrix, which is not the desired control:\n\n\nHow can the desired controls be achieved?"} +{"id": "000174", "text": "I am new to Godot 4, I am transferring from Unity. In my game the player controls a spaceship, which is followed by a camera. After getting the input, I update the player's velocity and rotation. However, I do not like when the camera mirrors exactly the player's rotation so I added a lerp in the player's script to control the camera's rotation. The player setup looks like this:\nPlayer\n Marker3D\n Camera\n\n\nAnd the lines of code in the player's script that move and rotate the camera:\nMarker3D.position = Marker3D.position.lerp(playerbody.position,.5)\nMarker3D.rotation = Marker3D.rotation.lerp(playerbody.rotation,.3)\n\nThis works great to make the camera follow the player, except when the player turns 180\u00b0 in any direction, when the camera jumps to the current player's current rotation instead of smoothly transitioning. Is there any way to fix this? Thanks!"} +{"id": "000175", "text": "I used ChatGPT to generate a script as I cannot code very well yet\nit does not know Godot 4 even exists yet so it gives me Godot 3 code,\nI fixed a lot of things that don't work in Godot 4 and found newer equivalents that will work but this is the one thing I can't figure out at all.\nThis is the snippet of my script I believe is broken:\nextends CharacterBody3D\n\n# Structure to represent an inventory slot\nclass InventorySlot:\n var background: TextureRect\n var item_icon: TextureRect\n\n # Constructor for InventorySlot\n func _init():\n background = TextureRect.new()\n item_icon = TextureRect.new()\n\n# Container for inventory slots\nclass InventorySlotContainer(Node):\n var inventory_slots: Array[InventorySlot]\n\n # Constructor for InventorySlotContainer\n func _init():\n inventory_slots = []\n\n # Function to add an inventory slot\n func add_inventory_slot():\n var slot = InventorySlot.new()\n inventory_slots.append(slot)\n add_child(slot.background)\n add_child(slot.item_icon)\n\n# Declare the inventory container\nvar inventory_container: InventorySlotContainer\n\n@onready var inventory_slot_container = InventorySlotContainer.new()\n\n# Declare the inventory slots\nvar inventory_slots: Array[InventorySlot]\n\n\nI have tried changing lots of things but to no avail\nI get these errors:\nLine 14:Expected \":\" after class declaration.\nLine 14:Unexpected \"(\" in class body.\nLine 15:Unexpected \"Indent\" in class body.\nLine 29:Expected end of file."} +{"id": "000176", "text": "In my project, I have an Area2D. I want to perform some actions if a CharacterBody2D I created overlaps it.\nI am thinking of using the body_entered signal to do this. I connected the signal, but how do I determine which character body triggered the signal? 
I have multiple character bodys in the scene and only one can trigger this action on overlap."} +{"id": "000177", "text": "Godot's documentation of its SizeFlags mentions for the three \"shrink-like\" flags SIZE_SHRINK_BEGIN, SIZE_SHRINK_CENTER, and SIZE_SHRINK_END:\n\nIt is mutually exclusive with SIZE_FILL and other shrink size flags, but can be used with SIZE_EXPAND in some containers.\n\nCombining \"shrink\" with \"expand\" sounds counter-intuitive.\nWhat would be an example of such a use case, i.e., which containers do allow this combination and is there a common semantic what \"shrink and expand\" should mean?"} +{"id": "000178", "text": "I am writing a plugin for custom import assets from OS. I pick assets on disk and copy them to Godot project structure. Then I need to trigger generating .import file for the imported images, but I cannot see any way to do it from code.\nI know there is a way of writing own import plugin and import assets from Godot IDE but I prefer to do the task by standalone separated plugin.\nFound EditorImportPlugin class but it does not do what I am looking for."} +{"id": "000179", "text": "I'm trying to animate a 2D character using a blendspace2D in an animation tree. I've built a basic state machine where, when the charge input is detected while the directional movement is not (0, 0), the state changes to the DASH state and remains there until the charge button is released, with the animation state changing to the Dash animation blendspace2D. However, while in the DASH state, the animation doesn't update if I change directions, e.g., if I dash left and then move right while dashing, the animation doesn't change so the character goes backwards. While not in the DASH state, I have a similar blendspace2D for normal movement, which updates the animations correctly. 
My code to manage the player character, including the state machine and blendspaces is below.\nextends CharacterBody2D\nclass_name Player\n\n@export var ACCELERATION = 17\n@export var MAX_SPEED = 100\n@export var FRICTION = 100\n@export var DASH_SPEED = MAX_SPEED * 1.5\nvar direction = Vector2.ZERO\nenum{\n MOVE,\n DASH,\n CHARGE,\n ATTACK\n}\nvar state = MOVE\n\n\n@onready var animation_player = $AnimationPlayer\n@onready var animation_tree = $AnimationTree\n@onready var animation_state = animation_tree.get(\"parameters/playback\")\n@onready var sprite_2d = $Sprite2D\n#@onready var animation_state = animation_tree.get(\"parameters/playback\")\n#var direction_changed = false\n\nvar anim_dir = Vector2.ZERO \n \nfunc _ready():\n animation_tree.active = true\n\nfunc _process(_delta):\n direction = Vector2.ZERO\n\n anim_dir.x = Input.get_axis(\"move_left\", \"move_right\")\n \n anim_dir.y = Input.get_axis(\"move_up\", \"move_down\")\n direction = anim_dir.normalized()\n \n\nfunc _physics_process(delta):\n\n match state:\n MOVE:\n move_state(delta)\n DASH:\n dash_state(delta)\n CHARGE: \n charge_state(delta)\n ATTACK:\n attack_state(delta)\n \nfunc move_state(delta):\n\n if direction.length()!= 0:\n\n if anim_dir.x !=0:\n set_animation_parameters()\n\n velocity = velocity.move_toward(direction * MAX_SPEED, ACCELERATION)\n move_and_slide()\n animation_state.travel(\"Move\")\n if Input.is_action_just_pressed(\"charge\"):\n state = DASH\n \n else:\n velocity = velocity.move_toward(Vector2.ZERO, FRICTION) \n if Input.is_action_just_pressed(\"charge\"):\n state = CHARGE\n \n \n animation_state.travel(\"Idle\")\n\n \n\nfunc set_animation_parameters():\n animation_tree.set(\"parameters/Idle/blend_position\", anim_dir)\n animation_tree.set(\"parameters/Move/blend_position\", anim_dir)\n animation_tree.set(\"parameters/Dash/blend_position\", anim_dir)\n \nfunc dash_state(delta):\n if direction.length()>0:\n if anim_dir.x != 0:\n set_animation_parameters()\n velocity = velocity.move_toward(direction * DASH_SPEED, ACCELERATION)\n move_and_slide()\n animation_state.travel(\"Dash\")\n if Input.is_action_just_released(\"charge\"):\n state=MOVE\n \nfunc attack_state(delta):\n pass\nfunc charge_state(delta):\n print(\"charging...\")\n \n if Input.is_action_just_released(\"charge\"):\n state = MOVE\n\nAs you can see, I set the animation parameters for each state while both moving and dashing and have the animation_state travel to Dash and Move in the same way, yet the latter works and the former doesn't. As far as I can tell it's not because the Move animation state gets interrupted by the Idle animation state, as I can change directions by holding the input vertically without resetting to the Idle animation and it still works.\nHere's my animation tree setup:\n\nHere's the Move state's blendspace2D where every point on the left is the left moving animation and every point on the right is the right moving animation\n\nAnd here's the Dash state with the same setup as the Move state\n\nI've tried everything I can think of and have come up empty. Any help would be appreciated."} +{"id": "000180", "text": "I am trying to set up my game to play a specific song during a match, and then switch to different songs at the end depending on which character won (those being just placeholders here for now). 
I have this code in a script that I plan to call methods from in the script for the main game loop as needed:\nextends AudioStreamPlayer\n\nclass_name Sounds\n\nenum CURRENT_SONG {MATCH, CHAR1, CHAR2}\n\n@onready var matchMusic: AudioStreamPlayer = $MatchMusic\n\nfunc playMusic(song := CURRENT_SONG.MATCH) -> void:\n match song:\n CURRENT_SONG.MATCH:\n matchMusic.play()\n CURRENT_SONG.CHAR1:\n pass\n CURRENT_SONG.CHAR2:\n pass\n\nI have an AudioStreamPlayer node in the scene tree named \"MatchMusic\" with a placeholder song attached.\nI watched this tutorial, where something similar to what I'm doing above worked, and changed things around to attach sound playback to certain steps in the gameplay loop instead of just clicking a button. Instead, when I attempt to run my game, I get the error \"Cannot call method 'play' on a null value.\"\nI tested commenting out the matchMusic.play() method, replacing it with a print statement, and running it then, to make sure it's not just a problem with how I'm trying to call my playMusic() function somehow- that part seems to be working fine."} +{"id": "000181", "text": "I have just started learning Godot and already know a decent bit of C# from unity. I am trying to set up a 3D character rig. The function should be there as the docs all say it should but when I compile this code:\nusing Godot;\nusing System;\n\npublic partial class world_control : Node\n{\n SceneTree tree;\n Node3D camera;\n Node3D head;\n \n // Called when the node enters the scene tree for the first time.\n public override void _Ready()\n {\n tree = GetTree();\n camera = tree.get_current_scene().find_child(\"camera\", false, false);\n head = tree.get_current_scene().find_child(\"head\", true, false);\n }\n\n // Called every frame. 'delta' is the elapsed time since the previous frame.\n public override void _Process(double delta)\n {\n camera.set_global_position(head.get_global_position());\n camera.set_global_rotation(head.get_global_rotation());\n }\n}\n\nIt gives me these errors:\n/***/Godot/Hand_Cannon/world_control.cs(14,17): 'SceneTree' does not contain a definition for 'get_current_scene' and no accessible extension method 'get_current_scene' accepting a first argument of type 'SceneTree' could be found (are you missing a using directive or an assembly reference?)\n\n/***/Godot/Hand_Cannon/world_control.cs(21,35): 'Node3D' does not contain a definition for 'get_global_position' and no accessible extension method 'get_global_position' accepting a first argument of type 'Node3D' could be found (are you missing a using directive or an assembly reference?)\n\n(I have left out a few more errors but basically all of the functions cause this excact error)\nExtra Information:\nGodot 4.1 with C#\n.NET 8 (Installed on day of posting)\nMac OSX 13.5.1\nI can and am very willing to provide more info if needed!\nThis is a problem in which I simply don't know what to try. I am very new to Godot having installing it today. Nothing so far has worked, in fact I did not even know that the values had getter functions!"} +{"id": "000182", "text": "Hey I try to get JSON DATA. The code is below. 
I tried many things, but I am not able to receive a normal JSON.\nfunc makeLoginRequest():\n\n var url = \"https://chocolatefactory-api.dreamstudio.my/login\"\n \n\n var data = {\n \"username\": \"test0001\",\n \"password\": \"test0001\"\n }\n \n\n var http_request = HTTPRequest.new()\n \n\n add_child(http_request)\n \n\n http_request.connect(\"request_completed\",Callable(self,\"_on_request_completed\"))\n \n\n http_request.request(url, [\"Content-Type: application/json\"], HTTPClient.METHOD_POST, JSON.new().stringify(data))\n \n\nfunc _on_request_completed(result, response_code, headers, body):\n\n if response_code == 200:\n\n print(\"Anfrage erfolgreich!!\")\n var json_data = JSON.new().parse(str(body))\n \n \n else:\n print(\"Fehler bei der Anfrage. Fehlercode:\", response_code)\n\nI always get this:\n[123, 10, 32, 32, 34, 116, 111, 107, 101, 110, 34, 58, 32, 34, 101, 121, 74, 104, 98, 71, 99, 105, 79, 105, 74, 73, 85, 122, 73, 49, 78, 105, 73, 115, 73, 110, 82, 53, 99, 67, 73, 54, 73, 107, 112, 88, 86, 67, 74, 57, 46, 101, 121, 74, 49, 99, 50, 86, 121, 83, 87, 81, 105, 79, 105, 73, 49, 90, 71, 82, 108, 77, 68, 65, 119, 77, 106, 100, 108, 89, 87, 77, 48, 90, 68, 77, 120, 79, 71, 81, 53, 78, 106, 103, 51, 90, 87, 85, 53, 89, 87, 74, 108, 77, 122, 107, 48, 77, 105, 73, 115, 73, 110, 86, 122, 90, 88, 74, 117, 89, 87, 49, 108, 73, 106, 111, 105, 100, 71, 86, 122, 100, 68, 65, 119, 77, 68, 69, 105, 76, 67, 74, 108, 101, 72, 65, 105, 79, 106, 69, 51, 77, 68, 77, 50, 78, 122, 89, 50, 77, 106, 82, 57, 46, 102, 73, 52, 74, 55, 53, 82, 106, 88, 74, 51, 77, 122, 66, 117, 121, 66, 90, 77, 122, 98, 82, 51, 108, 81, 120, 66, 120, 54, 115, 81, 81, 48, 118, 70, 90, 114, 111, 104, 71, 110, 106, 65, 34, 44, 10, 32, 32, 34, 117, 115, 101, 114, 73, 100, 34, 58, 32, 34, 53, 100, 100, 101, 48, 48, 48, 50, 55, 101, 97, 99, 52, 100, 51, 49, 56, 100, 57, 54, 56, 55, 101, 101, 57, 97, 98, 101, 51, 57, 52, 50, 34, 44, 10, 32, 32, 34, 117, 115, 101, 114, 110, 97, 109, 101, 34, 58, 32, 34, 116, 101, 115, 116, 48, 48, 48, 49, 34, 10, 125, 10]"} +{"id": "000183", "text": "I'm a bit confused about the correct use of TypedArrays in gdextension. How can I expose them correctly to gdscript?\n~~Consider a class MyClass : RefCounted.~~\nEDIT: Consider a class MyClass : Resource.\nI noticed godot defines a MAKE_TYPED_ARRAY macro, which it uses to create the TypedArrray implementations for it's Variants.\nIs it necessary or beneficial to use this for our own types? For example: MAKE_TYPED_ARRAY(MyClass, Variant::OBJECT)\nDoes the TypedArray type need to be registered to expose it?\nAnd most of all, why do I still need to cast the bloody values because they seem to be returned as Variant despite the typing of the TypedArray.\nIt's all just a little confusing and not a lot of documentation seems to be out there.\nSee Creating a new array type compatible with `godot::Variant`? for some additional context.\nEDIT: after some experimenting it seems like calling MAKE_TYPED_ARRAY(MyClass, Variant::OBJECT) doesn't seem to make much of a difference either way. At least not from the gdscript user perspective.\nThe same goes for registering TypedArray\nI managed to expose a TypedArray, and assigning an Array[MyClass] from gdscript works just fine.\nThe editor however is a different story.\nConsider the following:\n//test.h\nclass TestObject : public Resource { \n//... 
\n};\n\nclass TypedArrayTest : public Resource {\n GDCLASS(TypedArrayTest, Resource)\n\nprivate:\n TypedArray test_array;\n\nprotected:\n static void _bind_methods();\n\npublic:\n TypedArray get_test_array() const;\n void set_test_array(const TypedArray p_value);\n};\n\n//test.cpp\nvoid TypedArrayTest::_bind_methods() {\n ClassDB::bind_method(D_METHOD(\"get_test_array\"), &TypedArrayTest::get_test_array);\n ClassDB::bind_method(D_METHOD(\"set_test_array\", \"p_value\"), &TypedArrayTest::set_test_array);\n ClassDB::add_property(\"TypedArrayTest\", PropertyInfo(Variant::ARRAY, \"test_array\"), \"set_test_array\", \"get_test_array\");\n}\n\n//implementation of get and set \n\nSo now we @export this in gdscript.\n@export var test:TypedArrayTest\n\nThis works, but the array in the property editor shows up like an untyped array.\nTo assign an element, I first need to select Object as a type. Next I need to assign New TestObject instance to that element.\nAll this for just one element, that's a bit cumbersome.\nThe editor does check the type though, so that is alright. You can't assign any other type than Object, or any other instance than TestObject.\nThe most annoying part is that the editor shows the whole list of types in every step of this assignment.\nIs there any way to limit this?\nWhen you recreate this whole thing in plain gdscript, the editor does recognize @export var gd_test: Array[TestObjectGdsImplemetation] corretly.\nIn this case he editor does display the intended behaviour. You can just assign new TestObjectGdsImplemetation instances in one simple step, without going through all the nonsense I described above.\nThis leads me to believe it should be possible to achieve the same from GDExtension as well.\ngodotforums thread (with screenshots): https://godotforums.org/d/38121-how-do-i-properly-configure-typedarrays-in-gdextension\nofficial godot forum thread: https://forum.godotengine.org/t/how-to-set-up-a-typedarray-with-a-custom-type-in-gdextension/37652"} +{"id": "000184", "text": "I have a @tool-annotated empty scene which extends Node3D that I use for marking NPC spawn locations. On ready when is_editor_hint I add a mesh instance with geometry so the location is visible in the editor. The added mesh gets displayed in the editor just fine and everything works as expected except that clicking the tool-created mesh (which is child of the instantiated scene) in the editor doesn't actually select the parent scene. If I add the very same mesh to the scene statically instead of on _ready, I can select the instantiated scene by clicking the mesh as I would expect.\nIs there any way to make this \"dynamically created child\" clickable in the editor? Is there anything that the editor sets when adding a child node that doesn't happen when using add_child? I've tried reparenting or changing owner but nothing helped. Also the child's AABBs are correct.\nFor reference the Node3D scene's script goes like this:\nfunc _ready() -> void:\n if Engine.is_editor_hint():\n var mesh_instance = MeshInstance3D.new()\n mesh_instance.position.y += 0.03\n mesh_instance.cast_shadow = false\n var qmesh: QuadMesh = QuadMesh.new()\n qmesh.size = Vector2(0.4, 0.4)\n mesh_instance.mesh = qmesh\n add_child(mesh_instance)\n\nThank you for any help, trying to find specific spawner in the tree can be really frustrating.\nEDIT:\nSolved by the issue linked in Bugfish' comment: Setting owner to self in the tool script makes it work as expected. 
The owner needs to be set AFTER the child is added.\nadd_child(mesh)\nmesh.owner = self\n\nseems to work, while\nmesh.owner = self\nadd_child(mesh)\n\ndoes not"} +{"id": "000185", "text": "Error: expected string constant as 'preload' argument\n--- Debugging process started ---\nGodot Engine v3.5.2.stable.custom_build - https://godotengine.org\nOpenGL ES 3.0 Renderer: Mesa Intel(R) HD Graphics 620 (KBL GT2)\nAsync. shader compilation: OFF\n\n--- Debugging process stopped ---\n res://scripts/test.gd:66 - Parse Error: expected string constant as 'preload' argument.\n\nCode Snippet\n# this is use for set backgound phato\nfunc initialize_background(image_path: String):\n var background = TextureRect.new()\n texture = preload(image_path)\n background.texture = texture\n background.rect_min_size = get_viewport_rect().size\n add_child(background)\n background.raise()\n\nMake sure to review line 66 in your actual code and ensure that the 'preload' argument is a string constant as the error suggests. If you need further assistance, you can provide the relevant code snippet around line 66 for more detailed help.\nextends Node2D\n\nvar label: Label\nvar button1: Button\nvar button2: Button\nvar button3: Button\nvar texture : Texture\n\n\nfunc _ready():\n initialize_window()\n initialize_label(\"Hello, Godot!\", Vector2(500, 200), Label.ALIGN_CENTER)\n initialize_buttons()\n initialize_background(\"res://icon.png\") # Replace with your actual image path\n\n\nfunc _process(delta: float) -> void:\n update_label_with_mouse()\n\n# Window initialization\nfunc initialize_window():\n OS.set_window_resizable(false)\n OS.set_window_minimized(false)\n OS.set_window_maximized(false)\n OS.set_window_title(\"NOSTALGIA WARP WORLD\")\n\n# Label initialization\nfunc initialize_label(text: String, min_size: Vector2, alignment: int):\n label = Label.new()\n label.text = text\n label.rect_min_size = min_size\n label.align = alignment\n center_label()\n add_child(label)\n\n# Button initialization\nfunc initialize_buttons():\n button1 = create_button(\"Button 1\", [1], -50)\n button2 = create_button(\"Button 2\", [2], 0)\n button3 = create_button(\"Button 3\", [3], 50)\n\nfunc create_button(text: String, arguments: Array, y_offset: float) -> Button:\n var button = Button.new()\n button.text = text\n button.rect_min_size = Vector2(150, 30)\n button.connect(\"pressed\", self, \"_on_button_pressed\", arguments)\n button.rect_position.x = (get_viewport_rect().size.x - button.rect_min_size.x) / 2\n button.rect_position.y = (get_viewport_rect().size.y - button.rect_min_size.y) / 2 + y_offset\n add_child(button)\n return button\n\n# Window size change handler\nfunc _on_size_changed():\n center_label()\n\n# Center the label\nfunc center_label():\n label.rect_position.x = (get_viewport_rect().size.x - label.rect_min_size.x) / 2\n label.rect_position.y = (get_viewport_rect().size.y - label.rect_min_size.y) / 2\n label.set_z_index(1) # Set a z-index higher than the background\n\n\n# this is use for set backgound phato\nfunc initialize_background(image_path: String):\n var background = TextureRect.new()\n texture = preload(image_path)\n background.texture = texture\n background.rect_min_size = get_viewport_rect().size\n add_child(background)\n background.raise()\n\n\n\n# Button pressed handler\nfunc _on_button_pressed(button_id):\n match button_id:\n 1:\n label.text = \"Button 1 Pressed!\"\n 2:\n label.text = \"Button 2 Pressed!\"\n 3:\n label.text = \"Button 3 Pressed!\"\n _:\n pass # Handle unexpected button_id values if 
necessary\n\n# Update label with mouse position\nfunc update_label_with_mouse():\n var mouse_position = get_global_mouse_position()\n var mouse_text = \"Mouse X: \" + str(mouse_position.x) + \"Mouse Y: \" + str(mouse_position.y)\n # label.text = mouse_text\n # center_label()\n print(mouse_text)\n\nWhat I Tried and Expected\n\nI attempted to set a background photo for the scene's background, but encountered errors in the process.\nI expected the background image to be set without any issues."} +{"id": "000186", "text": "When using ResourceLoader's .load_threaded_request, .load_threaded_get_status and .load_threaded_get I can't get a loaded resource a second time. I expected it to be cached and now I'm unsure if that is the intended functionality.\nI am using Godot 4.2 on MacOS (Intel) and exploring asynchronous resource loading functionality. Here's a snippet showing some of my code:\nconst ROCKET_SCENE_FILE: String = \"res://scenes/rocket.tscn\"\n\n\nfunc _ready():\n ResourceLoader.load_threaded_request(ROCKET_SCENE_FILE) # asynchronously load the rocket scene\n\n\nfunc fire_rocket() -> void:\n var rocket_loading_status: ResourceLoader.ThreadLoadStatus = ResourceLoader.load_threaded_get_status(ROCKET_SCENE_FILE)\n if rocket_loading_status == ResourceLoader.THREAD_LOAD_LOADED:\n var rocket_scene: Resource = ResourceLoader.load_threaded_get(ROCKET_SCENE_FILE) # get the loaded scene\n var rocket_instance: Node = rocket_scene.instantiate() # instance the scene\n add_child(rocket_instance) # add to this scene\n else:\n assert(rocket_loading_status == ResourceLoader.THREAD_LOAD_IN_PROGRESS, \"FAULT failure loading rocket resource: ResourceLoader.ThreadLoadStatus = \" + str(rocket_loading_status))\n\n\nfunc _process(delta: float) -> void:\n if handle_actions:\n process_actions() # read input and set state/triggers\n \n if shoot == true:\n fire_rocket()\n shoot = false # reset the shoot trigger\n\nThe first time fire_rocket executes everything works as I expected: the status is THREAD_LOAD_LOADED and I successfully get the resource from the ResourceLoader. Strangely, the second time the status is THREAD_LOAD_INVALID_RESOURCE - meaning, The resource is invalid, or has not been loaded with load_threaded_request.\nI thought that ResourceLoader would cache the loaded resource since the default cache mode for ResourceLoader.load_threaded_request is CACHE_MODE_REUSE. There's no explanation of what the cache modes mean, in the documentation, so I'm not certain if this is a bug or my misinterpretation. Also my C++ knowledge is so old that I don't really understand what's going on in the source code - so looking at that wasn't really helpful for me.\nDoes anyone know if ResourceLoader is intended to cache resources so that they can be retrieved multiple times, or if it is a one time retrieval only?"} +{"id": "000187", "text": "I can create the code and logic, and it works fine. 
However, the problem arises when my camera goes out of the range of the Node2d node; it loses control of the Camera2D controls, and I never expected this issue in Godot.\nI highly recommend you watch this video as it can provide you with a better understanding of the issue.\nDownload link: https://firebasestorage.googleapis.com/v0/b/alor28.appspot.com/o/uploads%2FScreencast%202023-12-31%2018%3A02%3A34.mp4?alt=media&token=16769086-c2e0-494b-8336-d670468aa0f3\ni want to fix this zoom issue i don't want this types of issue i don't think it come with my Camera controller script.\nalso here is my code if need\nextends Camera2D\n\n# Player reference\nvar player\n\n# Camera settings\nvar default_zoom = Vector2(0.4, 0.4) # Default zoom level\nvar camera_offset = Vector2(0, 0) # Offset between player and camera\nvar follow_speed = 0.7 # Speed of camera following\nvar zoom_speed = 0.2 # Speed of zooming while moving\nvar zoom_step = 0.1 # Zoom step when moving or jumping\n\n# Track player's previous position to determine movement\nvar prev_player_position = Vector2.ZERO\n\nfunc _ready():\n # Attempt to find the player node dynamically with a maximum number of attempts\n var max_attempts = 10\n var current_attempt = 0\n\n while player == null and current_attempt < max_attempts:\n player = get_node_or_null(\"/root/Game/Player\")\n current_attempt += 1\n\n if player == null:\n print(\"Player node not found. Retrying in 1 second...\")\n yield(get_tree().create_timer(1.0), \"timeout\")\n\n if player == null:\n print(\"Player node not found after multiple attempts.\")\n\n # Set the initial zoom to the default value\n set_zoom(default_zoom)\n\n\nfunc _process(_delta):\n # Rest of the code remains unchanged\n if player:\n var target_pos = player.global_position + camera_offset\n position = position.linear_interpolate(target_pos, follow_speed)\n clamp_to_scene_bounds()\n apply_zoom_step()\n\n\nfunc clamp_to_scene_bounds():\n # Optionally, you can add code here to clamp the camera position to the scene bounds\n # For example:\n var viewport_rect = get_viewport_rect()\n position.x = clamp(position.x, viewport_rect.position.x, viewport_rect.size.x)\n position.y = clamp(position.y, viewport_rect.position.y, viewport_rect.size.y)\n\nfunc apply_zoom_step():\n if player.global_position != prev_player_position:\n # Player is moving or jumping\n var distance_to_player = position.distance_to(player.position)\n var target_zoom = clamp(1.0 / (distance_to_player * zoom_speed), 0.5, 2.0)\n\n # Apply zoom step\n var new_zoom = Vector2(default_zoom.x * target_zoom, default_zoom.y * target_zoom)\n new_zoom = new_zoom.linear_interpolate(get_zoom(), zoom_step)\n set_zoom(new_zoom)\n\n # Update the previous player position for the next frame\n prev_player_position = player.global_position"} +{"id": "000188", "text": "I am trying to display a Node2D which contains text and animation when the player enters an interaction zone. The Node2D I am trying to display, called EToInteract, is displaying, but something is messed up with its position and movement. Here is a video of what happening. You can see that the interact display is on the top left, and its not moving. Or rather, it is static on the screen. 
In my Interaction Manager, I used the following code:\n...\nNode2D interactDisplay;\n\npublic override void _Ready()\n{\n ...\n interactDisplay = GetNode(\"/root/WorldNode/InterfaceManager/EToInteract\");\n interactDisplay.Hide();\n ...\n}\n\npublic override void _Process(double delta)\n{\n if (activeAreas.Count > 0 && canIneract)\n {\n ...\n interactDisplay.GlobalPosition = otherBody.GlobalPosition;\n interactDisplay.Show();\n }\n else\n interactDisplay.Hide();\n}\n\notherBody is the purple NPC character you see in the video.\nAnd this is the layout of my World Scene\n\nI am trying to have the interact display to be on top of the NPC constantly, instead of moving around with the player like you see with the video. I though positioning the display's global position to the NPC's global position would work, but that doesn't seam to be the case. Any suggestions?"} +{"id": "000189", "text": "After exporting my project to Android a directory named .godot/exported appeared in the root of my Godot project. It seems to contain some cache for the resources I have exported.\nProblem is, autofill gets scenes from it when I type stuff like get_tree().change_scene_to_file and it messes with my ability to find the resources I need.\nDo you know if there is an option in Godot 4 which allows users to exclude specific directories from autofill?\n\nEdit1: I am using Godot v4.1.1 under Ubuntu 22.04.3 LTS. I thought this information might be useful.\nI have .godot/ directory in my .gitignore and the files from there are not tracked by git but are still popping up in the autofill for the change_scene_to_file method. I checked that preload() doesn't show resources from .godot/ directory as this Godot doc page describes but it is not my current issue."} +{"id": "000190", "text": "I want to implement a double jump mechanic in Godot with C#, but it does not work as expected. Maybe someone sees a problem in my code and could give me a hint! :)\nMy character jumps one time and then goes straight down, instead of doing a double jump.\nAppreciate the help.\nusing Godot;\n\npublic partial class Player : CharacterBody2D\n{\nprivate float _runSpeed = 750;\nprivate float _jumpSpeed = -1000;\nprivate float _gravity = 2500;\nprivate int _jump_counter = 0;\nprivate int _extra_jumps = 1;\n\n\npublic override void _PhysicsProcess(double delta)\n{\n var velocity = Velocity; \n velocity.X = 0;\n\n var right = Input.IsActionPressed(\"move_right\");\n var left = Input.IsActionPressed(\"move_left\");\n var jump = Input.IsActionPressed(\"move_jump\");\n \n //TODO -> Implement double jump\n if (jump && _jump_counter < _extra_jumps){\n velocity.Y = _jumpSpeed;\n _jump_counter += 1;\n GD.Print(_jump_counter + \" \" + _extra_jumps);\n }\n \n if (right){\n velocity.X += _runSpeed;\n }\n \n if (left){\n velocity.X -= _runSpeed;\n }\n \n if (IsOnFloor()){\n _jump_counter = 0;\n }\n \n Velocity = velocity;\n \n var animatedSprite2D = GetNode(\"Sprite2D\");\n\n if (velocity.Length() > 0)\n {\n velocity = velocity.Normalized() * _runSpeed;\n animatedSprite2D.Play();\n }\n else\n {\n animatedSprite2D.Stop();\n }\n \n if (velocity.X != 0)\n {\n animatedSprite2D.Animation = \"walking\";\n animatedSprite2D.FlipV = false;\n // See the note below about boolean assignment.\n animatedSprite2D.FlipH = velocity.X < 0;\n }\n \n velocity = Velocity;\n velocity.Y += _gravity * (float)delta;\n Velocity = velocity;\n MoveAndSlide(); \n }\n}\n\nThank you in advance."} +{"id": "000191", "text": "I am currently trying to build my first ever game. 
I had some great progress in the first few weeks, but now I want to improve some things that are annoying me. My major pain point right now is the dialog system.\nIn my project I have NPCs that the player can interact with if they come close. To achieve this I use the body_entered and body_exited signals in the NPC's script like this:\nfunc _on_worker_talking_area_body_entered(body):\n if body.name == \"Player\":\n $BubbleAnimation.show()\n $BubbleAnimation.play(\"on\")\n $DialogLayer/Dialog/Panel/DialogText.text = approach_text \n $DialogLayer.show()\n \n \nfunc _on_worker_talking_area_body_exited(body):\n if body.name == \"Player\":\n $BubbleAnimation.stop()\n $BubbleAnimation.hide()\n $DialogLayer.hide()`\n\nThis works nicely, so that when the player comes near an NPC a little bubble appears. Now the player can interact using different buttons (\"space\", \"y\" and \"n\") to trigger a dialog depending on the state of the NPC. I get these inputs like this:\nfunc _input(event: InputEvent):\n if event.is_action_pressed(\"interaction\") and $BubbleAnimation.visible:\n if worker_name == \"welcome_worker\" and state == 0:\n await $DialogLayer.split_text(get_current_text())\n state = 1\n state_changed.emit()\n \n if (worker_name == \"crane_worker\" or worker_name == \"milling_machine_worker\") \\\n and state == 1:\n await $DialogLayer.split_text(get_current_text())\n state = 2 \n \n if (worker_name == \"crane_worker\" or worker_name == \"milling_machine_worker\") \\\n and state == 2:\n await $DialogLayer.split_text(get_current_text()) \n \n if event.is_action_pressed(\"no\") and state == 2 and $BubbleAnimation.visible:\n state = 3\n await $DialogLayer.split_text(get_current_text())\n state_changed.emit()\n\n if event.is_action_pressed(\"yes\") and state == 2 and $BubbleAnimation.visible:\n state = 4\n await $DialogLayer.split_text(get_current_text())\n state_changed.emit()`\n\nThe text is then displayed in the DialogLayer which displays it by scrolling letter by letter:\nfunc split_text(input_text: String) -> void:\n $Dialog.show()\n var text_array = input_text.split(\"/\")\n for text_piece in text_array:\n scroll_text(text_piece)\n await get_tree().create_timer(5.5).timeout\n $Dialog.hide()\n\nfunc scroll_text(input_text: String) -> void:\n var text = input_text\n var visible_characters = 0\n for i in text.length():\n visible_characters += 1\n await get_tree().create_timer(0.1).timeout\n $Dialog/Panel/DialogText.text = text.left(visible_characters)`\n\nNow there are several things that don't work properly here:\n\nThe player can leave the \"talking area\" while the text is scrolling, which will stop displaying the text, but the scrolling continues without the player seeing it.\nIt is super annoying to wait for 5.5 seconds for each line of dialog, if that line of dialog is super short. So it would be nice to give the player the ability to skip to the end using space (\"interaction\").\nSometimes this whole set-up seems to fail completely for reasons I just can't figure out. In this case the dialog just disappears suddenly.\n\nSo I was wondering how could I set up such a dialog system properly? What are some guidelines or tips how to get this working? I would really apreciate every input, as I am quite stuck right now."} +{"id": "000192", "text": "Im making a game in Godot 4 and have a main world scene that the player is on. It has an Area2d node that changes the current scene to a UI scene when the player enters the area2D. The area2D takes a PackedScene and locationName as export variables. 
It loads the UI scene (named \"Location\") fine but I cannot get the UI scene to load the locationName I want. Also, the UI scene is not in the world scene (ie. where the area2d is and player are)\nI have tried multiple methods including moving the UI scene to the world scene and using signals, also creating set gets. The problem seems to be coming from the fact that the label node on the UI is ready after the name is changed, so I get a nil exception.\nLocation.gd:\nextends Node\n\n@onready var itemList = $VBoxContainer/HBoxContainer/VBoxContainer/ItemList\n@onready var nameLabel = $VBoxContainer/HBoxContainer/VBoxContainer/LocationName\n@onready var image = $VBoxContainer/HBoxContainer/Image\n\nvar localName: String = \"test\"\n\nfunc setLocationName(newName: String):\n self.localName = newName\n print(self.localName)\n \nfunc setLocationImage(newImage: Image):\n image.texture = newImage\n\n# Called when the node enters the scene tree for the first time.\nfunc _ready():\n print(\"ready\")\n if is_node_ready():\n update()\n #generate 3 rgcs and add them to itemList, display their name, gang, and favor\n #image.texture = ResourceLoader.load(location image)\n \nfunc update():\n print(\"update\")\n nameLabel.text = self.localName\n print(self.localName)\n\nEntryPoint.gd:\nextends Area2D\n\n@export var targetLocation : PackedScene\n@export var locationName: String\n\nfunc _ready():\n pass\n\nfunc _on_body_entered(body):\n var location = targetLocation.instantiate()\n location.setLocationName(locationName)\n get_tree().change_scene_to_packed(targetLocation)\n\n \n\nenter image description here"} +{"id": "000193", "text": "I'm making a grid based 2d puzzle game where you move each character once, and then end your turn. When your turn ends, water spreads to all surounding free grids next to the current water blocks. If water enters an area inhabited by a character, the character drowns. I can get the water to spread the first round, but the created water blocks don't continue to spread\nextends Area2D\n\nsignal next_turn\n\n@onready var check_up = $CollisionShape2D/ray_up\n@onready var check_down = $CollisionShape2D/ray_down\n@onready var check_left = $CollisionShape2D/ray_left\n@onready var check_right = $CollisionShape2D/ray_right\n\nvar water_block = [\"res://water.tscn\", ]\nvar water_block_count = 1\n\n# Called when the node enters the scene tree for the first time.\nfunc _ready():\n pass # Replace with function body.\n\n\n# Called every frame. 
'delta' is the elapsed time since the previous frame.\nfunc _process(delta):\n pass\n\n\nfunc _on_hud_end_turn():\n var count = 0\n var new_water = 0\n while count != water_block_count:\n var water_scene = load(water_block[count])\n if check_up && check_down && check_left && check_right:\n water_block.remove_at(count)\n new_water -= 1\n if not check_up.is_colliding():\n var water = water_scene.instantiate()\n water.position += Vector2(0, -64)\n add_child(water)\n count += 1\n new_water += 1\n water_block[count] = get_node(water)\n if not check_down.is_colliding():\n var water = water_scene.instantiate()\n water.position += Vector2(0, 64)\n add_child(water)\n count += 1\n new_water += 1\n water_block[count] = get_node(water)\n if not check_left.is_colliding():\n var water = water_scene.instantiate()\n water.position += Vector2(-64, 0)\n add_child(water)\n count += 1\n new_water += 1\n water_block[count] = get_node(water)\n if not check_right.is_colliding():\n var water = water_scene.instantiate()\n water.position += Vector2(64, 0)\n add_child(water)\n count += 1\n new_water += 1\n water_block[count] = get_node(water)\n \n water_block_count += new_water\n next_turn.emit()\n\nThis is my current code block... it isnt working at all. I first tried to have my water be a packed scene in inself. This allowed me to instantiate the new water blocks being created. The problem was it would only check from the packed scene. This was my attempt to fix that by saving the water child node created into an array so i could call that array to create spread at that node. I also added a part in the beginning to delete nodes that are no long spreading from the array. I get \"Invalid type in function 'get_node' in base 'Area2D (Water.gd)'. Cannot convert argument 1 from Object to NodePath\""} +{"id": "000194", "text": "I have 3 vectors, xyz, and i want to change only 1 and then calculate the other 2 in respect to their former directions (the closest possible to them). And i want to do it the most efficient way in terms of memory usage/game design.\nsmall illustration of the problem\n\nEDIT (after some constructive comments from Stef):\nWhat we have:\n\nx,y,z 3d vector, normalized and with same origin (0,0,0)\ny' (the up vector changes depending on the surface)\n\nWhat (i believe) we need:\n\nz' or x' (and all mutually perpendicular)\n\nWhat i tried:\n\nz' with the help of the y - y' angle around the x-axis and vice versa with x' and the z-axis. i think this \"could\" work with the proper linear algebra knowledge but would still mean an euler rotation, right? i prefer to only rotate with quaternions even though i still earn to figure them out\nusing the cross-product of y' and x to get z' and then creating the crossproduct of z' and y' to get x'. in theory it should work? my 3 dot-product are never 0, not one of them\n\nWhat i desire:\n\nchanging the x and z vector based on the angle of y and y' via a quaternion. i am working in godot and still figuring things out.. 
that said, i just found this:\n\n\nQuaternion Quaternion ( Vector3 arc_from, Vector3 arc_to )\n\nConstructs a quaternion representing the shortest arc between two points on the surface of a sphere with a radius of 1.0."} +{"id": "000195", "text": "i made a autoload file named \"saveLoad.gd\"\nThe content of \"saveLoad.gd\" is:\nvar a = 0\nvar b = 1\n\nthen, i tried to acess \"saveLoad.gd\" file in this way:\nvar items = [a, b]\n\nfunc store(itemname):\n if saveLoad.itemname == 0:\n itemname.visible = false\n\nfunc _ready:\n store(items)\n\n(there are a button named \"a\" and \"b\")\nbut it doesnt work well\ni think because engine recognize saveLoad.itemname as variable in \"saveLoad.gd\"\nnot saveLoad.a or saveLoad.b\nHow can i make this work?\nedit)\nThe comment code block doesn't look good, so i'll add modified code here\nfunc store(itemname:Array):\n for i in range(0, len(itemname), +1):\n var itemname2 = itemname[i]\n if Savenload.get(itemname2) == 0:\n get(itemname2).visible = false"} +{"id": "000196", "text": "I am trying to create an FPS game in godot 4, and I run into an issue where the RayCast does not collide with a temporary object that would give the player ammo. When I try to interact, nothing occurs, and when I have checked, the RayCast is not colliding.\nI tried changing the collision mask and the collision layer, neither have worked.\nHere's the code for the player:\nextends CharacterBody3D\n\nvar health = 100.0\nvar VC_REP = 0.0\nvar RB_REP = 0.0\n\nvar ammo = 30\nvar ammo_cap = 150\nvar mags = 5\n\nvar quest_title = \"Operation: Viewpoint\"\nvar quest_desc = \"Infiltrate Outlook Alpha\"\n\nvar speed\nconst SPRINT_SPEED = 8.0\nconst WALK_SPEED = 5.0\nconst JUMP_VELOCITY = 4.5\nconst SENSITIVITY = 0.003\n\nconst BOB_FREQ = 2.0\nconst BOB_AMP = 0.08\nvar t_bob = 0.0\n\n# Get the gravity from the project settings to be synced with RigidBody nodes.\nvar gravity = 9.8\n\nvar bullet = load(\"res://bullet.tscn\")\nvar instance\n\n@onready var head = $Head\n@onready var camera = $Head/Camera3D\n\n@onready var gun_anim = $\"../Player/Head/Camera3D/ar1/AnimationPlayer\"\n@onready var gunbarrel = $\"../Player/Head/Camera3D/ar1/M4a1/RayCast3D\"\n\n@onready var interaction_ray = $Head/Camera3D/RayCast3D\n\nfunc _ready():\n Input.set_mouse_mode(Input.MOUSE_MODE_CAPTURED)\n \n\nfunc _unhandled_input(event):\n if event is InputEventMouseMotion:\n head.rotate_y(-event.relative.x * SENSITIVITY)\n camera.rotate_x(-event.relative.y * SENSITIVITY)\n camera.rotation.x = clamp(camera.rotation.x, deg_to_rad(-40), deg_to_rad(60))\n\nfunc _physics_process(delta):\n # Add the gravity.\n if not is_on_floor():\n velocity.y -= gravity * delta\n\n # Handle jump.\n if Input.is_action_just_pressed(\"jump\") and is_on_floor():\n velocity.y = JUMP_VELOCITY\n \n if Input.is_action_pressed(\"sprint\"):\n speed = SPRINT_SPEED\n else:\n speed = WALK_SPEED\n\n # Get the input direction and handle the movement/deceleration.\n # As good practice, you should replace UI actions with custom gameplay actions.\n var input_dir = Input.get_vector(\"left\", \"right\", \"up\", \"down\")\n var direction = (head.transform.basis * Vector3(input_dir.x, 0, input_dir.y)).normalized()\n if is_on_floor():\n if direction:\n velocity.x = direction.x * speed\n velocity.z = direction.z * speed\n else:\n velocity.x = lerp(velocity.x, direction.x * speed, delta * 7.0)\n velocity.z = lerp(velocity.z, direction.z * speed, delta * 7.0)\n else:\n velocity.x = lerp(velocity.x, direction.x * speed, delta * 3.0)\n velocity.z = lerp(velocity.z, 
direction.z * speed, delta * 3.0)\n \n if is_on_floor() and health == 0:\n velocity.x = 0\n velocity.z = 0\n \n t_bob += delta * velocity.length() * float(is_on_floor())\n camera.transform.origin = _headbob(t_bob)\n \n if Input.is_action_pressed(\"shoot\"):\n if (!gun_anim.is_playing() and ammo != 0):\n gun_anim.play(\"Shoot\")\n instance = bullet.instantiate()\n instance.position = gunbarrel.global_position\n instance.transform.basis = gunbarrel.global_transform.basis\n get_parent().add_child(instance)\n ammo -= 1\n \n if (Input.is_action_just_pressed(\"reload\") and ammo == 0 and mags != 0):\n ammo += 30\n mags -= 1\n \n if interaction_ray.get_collider() != null and interaction_ray.get_collider().is_in_group(\"Ammo\"):\n print(\"Ammo\")\n \n print(ammo)\n print(mags)\n \n move_and_slide()\n\nfunc _headbob(time) -> Vector3:\n var pos = Vector3.ZERO\n pos.y = sin(time * BOB_FREQ) * BOB_AMP\n return pos\n\n\nAnd here is the code for the temporary object.\nextends MeshInstance3D\n\n# Called when the node enters the scene tree for the first time.\nfunc _ready():\n pass # Replace with function body.\n\n# Called every frame. 'delta' is the elapsed time since the previous frame.\nfunc _process(delta):\n pass\n\nfunc give_ammo(mags):\n mags += 3\n queue_free()\n return mags\n\n\nThe RayCast is supposed to call the give_ammo() function, and use mags to call the variable.\nAny help would be much appreciated, thank you in advance!"} +{"id": "000197", "text": "During getting improvement to my custom import resource plugin I faced with need of set custom (re)importing params such as: compress mode (vram compressed or etc), channel pack (optimised or RGBFriendly). But I didn't find any clues in source classes or the documentation of Godot. How can I manually import file with setting import settings precisely?\nI've tried looking through various of available import-named classes (EditorImportPlugin, EditorInterface, ResourceImporterTexture), but none of them could help me in any way."} +{"id": "000198", "text": "I have a scene called Planet and a script like that:\n1. extends Node2D\n2. \n3. @export var radius: float\n4. \n5. func collide(other:Node2D):\n6. if other is Planet :\n7. if(radius > other.radius):\n8. print(\"I win\")\n9. else:\n10. print(\"I loose\")\n11. else:\n12. print(\"Not a planet\")\n\nThe line 6 gives me an error Could not find type \"Planet\" in the current scope.\nHow do I do these kinds of tests where I check that a scene object is of particular type(current scene's type)?\nAlso, is it possible to specify a parameter type (line 5) to a particular scene? Smth like\nfunc collide(other:Planet):\n\nI know, it doesn't change much since the language is dynamically typed. 
But that would help with documentation and maybe autocomplete."} +{"id": "000199", "text": "I'm working on porting a unity project that I made a while ago over to Godot.\nIn Unity I have a class that looks like this:\npublic class MeshGenerator \n{\n\n public SquareGrid squareGrid;\n public MeshFilter walls;\n\n void CreateWallMesh(int[,] map, float squareSize, int tileAmount){\n MeshCollider currentColliders = walls.gameObject.GetComponent();\n Destroy(currentColliders);\n CalculateMeshOutlines();\n\n List wallVertices = new List();\n List wallTriangles = new List();\n Mesh wallMesh = new Mesh();\n float wallHeight = 5;\n\n foreach(List outline in outlines){\n for(int i = 0; i < outline.Count-1; i++){\n int startIndex = wallVertices.Count;\n wallVertices.Add(vertices[outline[i]]); // left vertex\n wallVertices.Add(vertices[outline[i+1]]); // right vertex\n wallVertices.Add(vertices[outline[i]] - Vector3.up * wallHeight); // bottom left vertex\n wallVertices.Add(vertices[outline[i+1]] - Vector3.up * wallHeight); // bottom right vertex\n\n wallTriangles.Add(startIndex + 0);\n wallTriangles.Add(startIndex + 2);\n wallTriangles.Add(startIndex + 3);\n\n wallTriangles.Add(startIndex + 3);\n wallTriangles.Add(startIndex + 1);\n wallTriangles.Add(startIndex + 0);\n }\n }\n wallMesh.vertices = wallVertices.ToArray();\n wallMesh.triangles = wallTriangles.ToArray();\n\n MeshCollider wallCollider = walls.gameObject.AddComponent();\n wallCollider.sharedMesh = wallMesh;\n\n float textureScale = walls.gameObject.GetComponentInChildren ().material.mainTextureScale.x;\n float increment = (textureScale / map.GetLength(0));\n Vector2[] uvs = new Vector2[wallMesh.vertices.Length];\n float[] uvEntries = new float[]{0.5f,increment}; \n\n for (int i = 0; i < wallMesh.vertices.Length; i++) \n {\n float percentY = Mathf.InverseLerp ((-wallHeight) * squareSize, 0, wallMesh.vertices [i].y) * tileAmount * (wallHeight / map.GetLength(0));\n uvs [i] = new Vector2(uvEntries[i % 2],percentY);\n }\n\n wallMesh.uv = uvs;\n wallCollider.sharedMesh = wallMesh;\n walls.mesh = wallMesh;\n }\n\nAnd I'm getting an error that says\n\nThe type or namespace name 'MeshFilter' could not be found (are you\nmissing a using directive or an assembly reference?)\n\nI believe it's referencing this unity mesh, and I'm curious what the correct way to do it in Godot is.\n\nEdit:\nIn case it helps, what the class does is takes a random 2D map and builds up walls around the area the player is supposed to travel through. Here's a sample:"} +{"id": "000200", "text": "Camera or other nonmoving objects does not look like teleporting when moving the camera with damping. Player does not look like teleporting when camera is stationary. Player does not look like teleporting when camera follows player without damping. 
But when camera is set to follow player with damping, player looks like teleporting (other nonmoving objects still appear normal).\nCode at the camera\nfunc _process(delta):\n #Smooth transition\n var to = character.position + origin\n camPosWoutOffset = smoothTo(camPosWoutOffset, to, smoothness, delta)\n\n #offset added to smoothed position\n self.global_transform.origin = Vector3(camPosWoutOffset.x + offset.x,\n origin.y + offset.y, camPosWoutOffset.z + offset.z)\n\nfunc smoothTo(from: Vector3, to: Vector3, smoothness: float, delta: float) -> Vector3:\n var output = Vector3(damp(from.x, to.x, smoothness, delta),\n to.y, damp(from.z, to.z, smoothness, delta))\n return output\n\nfunc damp(source: float, target: float, smoothing: float, delta: float) -> float:\n return lerp(source, target, 1 - pow(smoothing, delta))\n\nCode at the player\n accel = axis * 150 * delta\n velocity += accel\n velocity = velocity.limit_length(MaxSpeed)\n move_and_slide()\n\nHow it looks:\nhttps://youtu.be/s1cYYLDoNk8\nhttps://www.youtube.com/watch?v=s1cYYLDoNk8\nI have tried lerp, slerp both with and without \"delta time\" but all had problems."} +{"id": "000201", "text": "I am trying to use the BodyShapeEntered signal for an Area3D in C#. I can't figure out what the delegate should look like for this event.\nI get the following compile error: \"No overload for 'OnCollision' matches delegate 'Area3D.BodyShapeEnteredEventHandler'\"\nUnfortunately, I can't find any info about what parameters I should use for this handler, if this isn't it. My code matches the description in https://docs.godotengine.org/en/4.2/classes/class_area3d.html#signals\nI am using version 4.2. Here is my code:\nusing Godot;\nusing System;\n\npublic partial class LocationExit : Area3D\n{\n public static readonly PackedScene DefaultExitScene = ResourceLoader.Load(\"res://Overworld/Overworld.tscn\");\n \n public PackedScene ExitScene;//alternate scene to exit to\n string StoryVariableOnExit;//increments this variable in the story on collision\n // Called when the node enters the scene tree for the first time.\n public override void _Ready()\n {\n this.BodyShapeEntered += OnCollision;\n }\n\n // Called every frame. 'delta' is the elapsed time since the previous frame.\n public override void _Process(double delta)\n {\n }\n \n \n private void OnCollision(Rid body_rid, Node3D body, int body_shape_index, int local_shape_index)\n {\n //cannot use, this will not compile\n }\n}"} +{"id": "000202", "text": "I can't find a structure representing a file in Godot to use in the editor\nI want to add a property to my Node\nsomething like:\n[GlobalClass]\npublic partial class MyObj : Node2D\n{\n [Export] public File myFile;\n\n...\n\nIt should be possible to pick a file (a file location) and assign it to that variable, is it possible?"} +{"id": "000203", "text": "I tried following Godot's documentation on MultiMeshes (the set_instance_color is specially interesting), but my models always end up washed out, way brighter than they should be (left is the correct model, right is after multimeshes applied):\n\nIt's a model made out of voxels. Each cube is put into the same multimesh and then each instance's color is changed depending on the imported model's colors."} +{"id": "000204", "text": "In the Godot engine, it is possible to define one scene as the \"main scene\". 
In the tutorials provided in the docs, the \"main scene\" is simply the scene your game starts with, and that is also the description I can find in various places.\nYet, it is not yet clear to me which one of the following is true about the \"main scene\", or whether actually both are possible and it is just an implementation decision:\n\nThe main scene is the first scene displyed when the game starts. In more complex games, it will be replaced once or multiple times throughout the run of the game.\nThe main scene is the root scene filling the screen throughout the game's lifetime. Thus, it contains just e.g. a title bar and a large subview or something. Whenever the \"screen\" (whatever the user sees, e.g. a full-screen main menu vs. the actual in-game screen) switches, the content of the subview changes.\n\nIs it the former, the latter, or both, depending on what I decide during implementation?"} +{"id": "000205", "text": "I created this topic so I could ask you a question, how can I change the configuration of a Godot project through code. What happens is that I want to create a configuration menu for my project, which would be to change the resolution to give an example.\nIt sounds simple and it really is, but it turns out that most of the information is very old and is no longer compatible with Godot 4, since the majority would be for version 3.\nI have tried using plugins like Godot Game Settings to make the task easier, but it turned out that it greatly reduced performance and caused many errors in its execution. Although this can be caused by many reasons not linked to the plugins."} +{"id": "000206", "text": "I am making a portal system in godot 4. There are 2 worlds, everything in the normal world is on layer 1 or culling layer 1 (lights), and vice versa for altered world. The player model that moves around has two cameras, one's layer is the normal layer (the main one) and the other camera's layer is the altered layer. I have script that gets what the altered camera is seeing, and it gives that image to a shader to draw it.\ngdscript:\nextends MeshInstance3D\n\nvar portalTexture : Texture\nvar portalMaterial : ShaderMaterial\n@onready var altered_camera= $\"../../MainCharacter/altered_camera\"\n\n\nfunc _process(delta):\n scale = Vector3(altered_camera.get_viewport().get_texture(\n).get_size().x / 1000,altered_camera.get_viewport().get_texture().get_size(\n).y / 1000,.1)\nget_active_material(0).set_shader_parameter(\"portal_texture\", \naltered_camera.get_viewport().get_texture())\n\nshader code:\nshader_type spatial;\nuniform sampler2D portal_texture;\n\n\nvoid fragment() {\n ALBEDO = texture(portal_texture, UV ).xyz;\n}\n\nWhen I look at the mesh, I see what the normal camera sees, instead of the altered one. I know this because there is a ball in front of the camera that is only in the altered world. What's going on?\nI think it may be because the camera in the normal world is selected as the current one."} +{"id": "000207", "text": "I have a TerrainManager which has a child node called Parts.\nI have an array of packed scenes (currently only two: Part1 and Part2). 
The TerrainManager randomly selects parts from this array and positions them sequentially.\nI'm able to test this in the editor too.\nWhen playing, the method works fine and the nodes are added to the tree; however, I'm having trouble debugging because in the editor the Parts are there, but they don't appear in the node tree.\nI have added outputs to share whether the nodes are added to the tree, and have tested the path to the nodes too, which both appear to be correct - I just can't see them in the tree at all and therefore can't select them in the editor.\nTerrain_Manager.gd:\n@tool\nextends Node\nclass_name TerrainManager\n\n@export var terrain_parts : Array[PackedScene]\n@export var terrain_part_container : Node2D\n\nconst MAX_PARTS = 10\n\nvar end_position : Vector2 = Vector2(0, 0)\n\nfunc _ready() -> void:\n if Engine.is_editor_hint():\n return\n \n fill_container()\n\n## Add a new segment to the terrain\nfunc add_segment() -> void:\n var random_part = terrain_parts.pick_random().duplicate() as PackedScene\n var part = random_part.instantiate()\n part.name=\"TerrainPartInstance_%s\" % part.name\n\n ## Add to container\n terrain_part_container.add_child(part)\n\n ## Reposition to the end of the last segment\n if terrain_part_container.get_child_count() > 1:\n var _part = terrain_part_container.get_child(terrain_part_container.get_child_count() - 2) as TerrainPart\n end_position += _part.global_position_end - _part.global_position_start\n \n part.position = end_position + part.global_position_start\n\n## DEBUG TOOLS:\n@export var request_refresh : bool = false : set = _empty_and_fill_container\nfunc _empty_and_fill_container(_set : bool) -> void:\n _empty_container(true)\n fill_container()\n\n@export var request_empty : bool = false : set = _empty_container\nfunc _empty_container(_set : bool) -> void:\n for child in terrain_part_container.get_children():\n child.queue_free()\n \n end_position = Vector2(0, 0)\n\nTerrain_Part.gd:\n@tool\nextends Node\nclass_name TerrainPart\n\n@export var _anchor : SS2D_Shape_Anchor\n@export var _shape : SS2D_Shape_Open\n\nfunc _ready() -> void:\n if Engine.is_editor_hint():\n print(\"Checking that %s is in tree: %s\" % [self.name, self.is_inside_tree()])\n return\n\n\nvar global_position_start : Vector2 : get = _get_global_position_start\nfunc _get_global_position_start() -> Vector2:\n _validate()\n return _get_ss2d_point_at_index(0).position\n\nvar global_position_end : Vector2 : get = _get_global_position_end\nfunc _get_global_position_end() -> Vector2:\n _validate()\n return _get_ss2d_point_at_index(-1).position\n\n\nfunc round_first_point() -> void:\n var _first_point : SS2D_Point = _get_ss2d_point_at_index(0)\n _first_point.position = Vector2(round(_first_point.position.x), round(_first_point.position.y))\n\nfunc round_last_point() -> void:\n var _last_point : SS2D_Point = _get_ss2d_point_at_index(-1)\n _last_point.position = Vector2(round(_last_point.position.x), round(_last_point.position.y))\n\nfunc _get_ss2d_point_at_index(_index : int) -> SS2D_Point:\n return _shape._points._points[_shape._points._point_order[_index]]\n\n# TOOLS:\n@export var validate : bool = false : set = _validate\nfunc _validate(_val : bool = false) -> void:\n round_first_point()\n round_last_point()\n set_bezier_handles(-1, Vector2(-250,0), Vector2(250,0))\n set_bezier_handles(0, Vector2(-250,0), Vector2(250,0))\n\nfunc set_bezier_handles(_index : int, _value : Vector2, _value2 : Vector2) -> void:\n var _point : SS2D_Point = _get_ss2d_point_at_index(_index)\n 
_point.point_in = _value\n _point.point_out = _value2\n\nScreenshot of editor when running game (expected output - except the node names are not being changed correctly:\n\nOutput when using TerrainManager.request_refresh in editor:\nChecking that TerrainPartInstance_Part1 is in tree: true\nChecking that TerrainPartInstance_Part2 is in tree: true\nChecking that @Node2D@42549 is in tree: true\nChecking that @Node2D@42550 is in tree: true\nChecking that @Node2D@42551 is in tree: true\nChecking that @Node2D@42552 is in tree: true\nChecking that @Node2D@42553 is in tree: true\nChecking that @Node2D@42554 is in tree: true\nChecking that @Node2D@42555 is in tree: true\nChecking that @Node2D@42556 is in tree: true\n\nScreenshot of editor after running the above:"} +{"id": "000208", "text": "I am making an RTS game in godot4 4.2.1 Stable, just for shiggles. I am right clicking to perform a raycast. When I start the game and I right click, I am getting a duplicate input and it is giving me 2 seperate answers. Here is the code I am working with and its output.\nif Input.is_action_just_pressed(\"Right_Click\"):\n rayCast()\n if results.size() > 0 && ControlsMaster.selected_actors.size() > 0:\n print(\"RIGHT CLICKED at: \", round(results.position), \"\\n\")\n ControlsMaster.mouse_pos_ref = round(results.position)\n ControlsMaster.formation(\"square\", results)\n print(\"complete\\n\")\n results.clear()\n\nfunc rayCast():\n var worldspace = get_world_3d().direct_space_state\n var from = project_ray_origin(mouse_POS)\n var to = project_position(mouse_POS, 1000)\n results = worldspace.intersect_ray(PhysicsRayQueryParameters3D.create(from, to))\n\nThe raycast is coming from the mouse and projecting to the map perpendicular to the screen. The ouput that I get is the following. This output is coming from a single click.\nselected_actors array contains: [UNIT_0001:, UNIT_0011:]\n\nRIGHT CLICKED at: (-40, 8, -6)\n\nlead_actor_position(-40.12659, 7.526592, -6.26702)\nresult position(-40.12659, 7.526592, -6.26702)\n{ \"Position_0\": (-40.12659, 7.526592, -6.26702), \"Position_1\": (-45.12659, 0, -6.26702), \"Position_2\": (-35.12659, 0, -6.26702), \"Position_3\": (-40.12659, 0, -1.26702), \"Position_4\": (-40.12659, 0, -11.26702), \"Position_5\": (-45.12659, 0, -1.26702), \"Position_6\": (-35.12659, 0, -1.26702), \"Position_7\": (-45.12659, 0, -11.26702), \"Position_8\": (-35.12659, 0, -1.26702) }\ncomplete\n\nRIGHT CLICKED at: (-78, 0, -107)\n\nlead_actor_position(-78.43892, 0, -106.6246)\nresult position(-78.43892, 0, -106.6246)\n{ \"Position_0\": (-78.43892, 0, -106.6246), \"Position_1\": (-83.43892, 0, -106.6246), \"Position_2\": (-73.43892, 0, -106.6246), \"Position_3\": (-78.43892, 0, -101.6246), \"Position_4\": (-78.43892, 0, -111.6246), \"Position_5\": (-83.43892, 0, -101.6246), \"Position_6\": (-73.43892, 0, -101.6246), \"Position_7\": (-83.43892, 0, -111.6246), \"Position_8\": (-73.43892, 0, -101.6246) }\ncomplete\n\nIm not sure how to add the 2 scripts that would make up the context, I dont come here often. Sorry in advance.\nI have tried changing it to a key press or press and hold and it still does the double output.\nThe closer the raycast is to the center of the camera, the more accurate it is. The farther it is from 0,0,0 the further the actor spawns from where I actually did the mouse press at.\nI have also tried to condition it so that it would check a variable called was_mouse_pressed. If it was false, it would allow the click, if True, then it would not allow the click. 
When I did this, it would run through the code, set it to True and then run the code again with it being false at the start and then turning True again. Even though I never had a code line to reset the bool to false.\nI really don't understand and might switch to Unreal for the project. Unity isn't an option for me."} +{"id": "000209", "text": "In Godot, using C#, I'm trying to do something that I'd assume would be super simple, but I can't seem to get it to work the way I would like.\nI have struct SnapPoint that simply stores a Vector3 and a string.\npublic struct SnapPoint\n{\n public Vector3 Position;\n public string Name;\n\n}\n\nI then want to have an exported list of these SnapPoints that I can add to and remove from in the inspector. I've seen controls like this on other nodes, for example, the CSG nodes, so I assumed it would be possible.\nI've tried a few approaches. Some either didn't work or didn't work the way I'd like.\nMy first thought was to just export a list of these structs, but that doesn't work. Using C# collections or arrays results in a build error (The type of exported property is not supported) while using godot collections results in a Varient type error. I've also tried adding [System.Serializable] above the struct, though that didn't do anything. I also tried changing from a struct to a class, and that didn't help either.\nSo, I looked into using resources instead. I created a SnapPoint resource and have an export for that resource type. This works...but means I have to create a new resource for every snap point I have and is a bit of a pain.\nI looked into writing a custom plugin with a custom inspector and that approach looked like it could get me what I was looking for, but seemed overly verbose for something as simple as this. Perhaps it isn't simple, but it sure seems like it should be in my mind. I'm fine doing this, I just wanted to see if anyone knew of a better approach before I went down that path.\nSo, ultimately, is there an easy way to get the functionality I'm looking for or is this a limitation of Godot's inspector? Or am I missing something fundamentally? 
I come from Unity and within their inspector this task is trivial."} +{"id": "000210", "text": "I'm trying to capture images of MeshInstance3D via SubViewport and assigning them as textures to sprites as such:\n\n@tool\nextends Node2D\n\nconst PIXEL_PER_METER = 100.0\n@export var btn := false : set=set_btn\n\nfunc set_btn(new_val):\n if(new_val):\n assign_images()\n\nfunc assign_images():\n var sub_viewport=$SubViewport\n var camera=$SubViewport/Camera3D\n var mesh_instances=[ $SubViewport/A, $SubViewport/B, $SubViewport/C ]\n var sprites=[ $A, $B, $C ]\n \n sub_viewport.render_target_update_mode = SubViewport.UPDATE_ALWAYS\n camera.projection = Camera3D.PROJECTION_ORTHOGONAL\n \n for i in len(mesh_instances):\n var sprite=sprites[i]\n var mesh=mesh_instances[i]\n var mesh_aabb=mesh.get_aabb()\n camera.size = max(abs(mesh_aabb.size.x), abs(mesh_aabb.size.y))\n camera.global_position = Vector3(mesh.global_position.x, mesh.global_position.y, camera.global_position.z)\n sprite.position = Vector2(mesh.position.x * PIXEL_PER_METER, mesh.position.y * -PIXEL_PER_METER)\n #await get_tree().process_frame # this doesn't work either\n sprite.texture=ImageTexture.create_from_image(sub_viewport.get_texture().get_image())\n\nHow ever this doesn't work, it doesn't capture the right image, nor does it place the sprite at right position relative to it's mesh counterpart:\n\nFor the image capture part, I suspect SubViewport & Camera3D isn't being updated quickly enough hence it captures the wrong image.\nAs for the position part, I believe 1 m in 3D translates to 100 pixels in 2D, maybe due to the unit differences the size & position of image is wrong.\nSo how do I implement this properly?\nNote: I'm open to an alternate approach as long as it gives the same result, and no MeshInstance2D does not work\nMinimal reproduction project (MRP)"} +{"id": "000211", "text": "I've set up a basic player.\nIt works when I place only one instance of it in a scene. I need two instances due to local multiplayer.\nHowever, once I place two instances of it in a scene, it just moves out of my vision.\nBy observing where it's positioned using a basic print function, it starts moving in random directions crazily super fast for a few seconds, and then it's movement becomes normal.\nHere's my code for it.\nextends CharacterBody2D\n\n@export var speed = 100\n@export var dash_speed = 300\n@export var prefix = ''\nvar dashing = false\n\n@export var texture: Texture2D:\n set(v): \n $Sprite2D.set_texture(v)\n\nfunc _ready():\n $CPUParticles2D.emitting = false\n\nfunc get_input():\n var input_direction = Input.get_vector(prefix + \"left\", prefix + \"right\", prefix + \"up\", prefix + \"down\")\n velocity = input_direction * speed\n if Input.is_action_pressed(prefix + \"dash-jump\"):\n velocity += input_direction * dash_speed\n dashing = true\n\nfunc _physics_process(_delta):\n get_input()\n if dashing:\n $CPUParticles2D.process_material.gravity = Vector3(-velocity.x, -velocity.y, 0) / 2\n $CPUParticles2D.restart()\n move_and_slide()\n dashing = false\n\nI don't know if the problem's in move_and_slide or something else. 
Don't mind the particles, and the input prefix is so the second player can use different input actions from the first one and be independent.\nBy the way, the texture variable exists so I can easily change it from Inspector, in case I wanna have a different sprite for player 2."} +{"id": "000212", "text": "I would like to export my project as a web build, for which I am following the example given in these tutorials: tutorial1 tutorial2. This seems routine enough, but when I try to reproduce this, I get the following messages:\n\n-Exporting to Web is currently not supported in Godot 4 when using C#/.NET. Use Godot 3 to target Web with C#/Mono instead.\n\n\n-If this project does not use C#, use a non-C# editor build to export the project.\n\nScreenshot of the errors\nMy project does not use C#. I am not sure what it means with a \"non-C# editor build\" and could not find anything about it.\nThank you for checking this out.\nI am at a loss for what to do. I don't know what a \"non-C# editor build\" is."} +{"id": "000213", "text": "I currently develop a game where the player moves a camera over a tilemap map to navigate. Unfortunatley as shown in the video below, the textures look weird when the camera is moving and I don\u2019t know how to make them look sharp.\nvideo: https://www.loom.com/share/f98c47207a0b4dcaaeae621ef1b028d9?sid=a6128b25-aa58-43ce-a140-26a17a07b743\nCode of camera scene:\nextends Camera2D\n\n# Zoom variables\nvar min_zoom = Vector2(1, 1)\nvar max_zoom = Vector2(2, 2)\nvar desired_zoom = zoom\n# Movement variables\nvar camera_speed = 500\n# Map boundaries\nvar min_x_pos = 0\nvar max_x_pos = 2560\nvar min_y_pos = 0\nvar max_y_pos = 1360\n\n\nfunc _physics_process(delta):\n if global_position != get_camera_position(delta):\n global_position = get_camera_position(delta)\n# var tween = create_tween()\n# tween.tween_property(self, \"global_position\", get_camera_position(delta), .2)\n\n\nfunc _unhandled_input(event):\n if event.is_action_pressed(\"zoom_in\"):\n desired_zoom = zoom + Vector2(.25, .25)\n if desired_zoom <= max_zoom and desired_zoom >= min_zoom:\n var tween = create_tween()\n tween.tween_property(self, \"zoom\", desired_zoom, .08)\n elif event.is_action_pressed(\"zoom_out\"):\n desired_zoom = zoom - Vector2(.25, .25)\n if desired_zoom <= max_zoom and desired_zoom >= min_zoom:\n var tween = create_tween()\n tween.tween_property(self, \"zoom\", desired_zoom, .08)\n\n\nfunc get_camera_position(delta):\n var new_camera_position = global_position\n if Input.is_action_pressed(\"ui_right\") and new_camera_position.x <= 1984:\n new_camera_position.x += camera_speed * delta\n if Input.is_action_pressed(\"ui_left\") and new_camera_position.x >= 400:\n new_camera_position.x -= camera_speed * delta\n if Input.is_action_pressed(\"ui_up\") and new_camera_position.y >= 292:\n new_camera_position.y -= camera_speed * delta\n if Input.is_action_pressed(\"ui_down\") and new_camera_position.y <= 1068:\n new_camera_position.y += camera_speed * delta\n return new_camera_position"} +{"id": "000214", "text": "I have a parent class (not node):\nclass_name Dragon\n\nextends Area2D\n\n@onready var dragon_frames = $DragonFrames\n\nfunc _ready():\n rotation += deg_to_rad(60)\n\nThe inheriting class:\nclass_name FireDragon\n\nextends Dragon\n\nfunc _ready():\n dragon_frames.play(\"flying_red_dragon\")\n\nThe FireDragon node is not rotating. The animation works fine. 
\nThis works as expected:\nclass_name FireDragon\n\nextends Dragon\n\nfunc _ready():\n dragon_frames.play(\"flying_red_dragon\")\n rotation += deg_to_rad(60)\n\nDo some attributes not work with inheritance?\nI'm using godot 4.2.1"} +{"id": "000215", "text": "I have a scene 'DragonSelector' which extends Area2D. The scene has a Sprite2D node called 'DragonSprite'.\nDragonSelector.gd\nclass_name DragonSelector\nextends Area2D\n\n@onready var dragon_sprite = $DragonSprite\n\nfunc _ready():\n pass \n\nfunc _process(delta):\n pass\n\nfunc set_texture(dragon_texture: Texture):\n dragon_sprite.Texture = dragon_texture\n\nI want to instantiate this scene in 'Main' scene and set the Texture on 'DragonSprite' in runtime.\nMain.gd\nclass_name Main\nextends Node2D\n\nvar dragon_selector_scene = preload(\"res://dragon_selector/dragon_selector.tscn\")\n\nfunc _ready():\n var dragon_selector_instance = dragon_selector_scene.instantiate()\n var texture = preload(\"res://dragon/flying_dragon-gold.png\")\n dragon_selector_instance.set_texture(texture)\n add_child(dragon_selector_instance)\n dragon_selector_instance.position = Vector2(100, 100) \n \nfunc _process(delta):\n pass\n\nThis yields an error: Invalid set index 'Texture' (on base: 'Nil') with value of type 'CompressedTexture2D'. This is associated with the texture assignment line in set_texture:\n\nTo validate that the texture is where I think it is, I changed function set_texture to load the texture directly from within DragonSelector.gd:\nfunc set_texture(dragon_texture: Texture):\n dragon_sprite.Texture = preload(\"res://dragon/flying_dragon-gold.png\")\n #dragon_sprite.Texture = dragon_texture\n\nThis works as expected. Apologies for the vague title, I have no idea what to refer this to. Suggestions for correction are welcome.\nUsing godot 4.2.1"} +{"id": "000216", "text": "I have a game_board scene with only a root Node2D and no children.\nI wish to programmatically create playing slots on this board.\nThese slots are to be instantiated from a separate scene file.\nSince GDScript doesn't have a constructor, I wrote a static constructor initializing the required members (currently just a slot_id: int)\ngame_board.gd\nextends Node2D\n\nconst play_slot_scene = preload(\"res://scenes/PlaySlot/play_slot.tscn\")\n\nfunc _ready():\n if not play_slot_scene.can_instantiate():\n push_error(\"Couldn't instantiate play slot\")\n\n var firstSlot: PlaySlot = play_slot_scene.instantiate()\n firstSlot.slot_id = 0;\n \n add_child(firstSlot)\n \n for n in range(1,7):\n var nextChild = firstSlot.constructor(n)\n add_child(nextChild)\n\nplay_slot.gd\nclass_name PlaySlot\nextends Node2D\n\nconst self_scene = preload(\"res://scenes/PlaySlot/play_slot.tscn\")\n\n@export var slot_id: int\n\nstatic func constructor(id: int = 0)-> PlaySlot:\n var obj = self_scene.instantiate()\n \n obj.slot_id = id\n \n return obj\n\nThe code errors out at var firstSlot: PlaySlot = play_slot_scene.instantiate() because firstSlot is a Node2d obj (not an instance of PlaySlot class).\nIf I remove the static typing, the next line fails because slot_id does not exist on Node2D.\nHow do I instantiate these nodes with the right class?\nTIA"} +{"id": "000217", "text": "I am currently trying my hand at a 3D ARPG in Godot 4.2.1 (and 4.2.2) and have a problem with collision detection. 
The collision between my projectile (Area3D with CollisionShape3D) collides with an enemy (CharacterBody3D) as expected, but not with instances of StaticBody3D or CSGBox3D.\nSetup:\n\nCollision Layer and Mask on Projectile and all Targets set to 1\nbody entered and area entered signals on the projectile are connected to respective functions in the script projectiles script\nMonitoring is enabled\n\nI am wondering if the problem is that the projectile is an Area3D? Or am I missing something obvious?\nThanks in advance.\nWhat i tried:\n\nchanging collision layers and masks\nchanging the speed of the projectile to a stupendously low value\nchanging the root node of the projectile to a RigidBody3D (resulted in no collision at all, not even with the enemy"} +{"id": "000218", "text": "I have a main scene and secondary scene. I want to create multiple instances of the secondary scene. The secondary scene will emit a signal with information which is specific to it. The main scene will detect the signal and act on the information.\nThe main scene is empty. The secondary scene is made of an icon and collision2D to allow detection of mouse clicks:\n\nSecondary.gd:\nclass_name Secondary extends Area2D\n@onready var sprite_2d = $Sprite2D\n\nsignal secondary_clicked(value)\nvar information\n\nfunc _input(event):\n if event.is_action_pressed(\"mouse_left_click\"):\n secondary_clicked.emit(information)\n\nMain.gd:\nclass_name Main extends Node2D\n\n# Called when the node enters the scene tree for the first time.\nfunc _ready():\n var secondary_scene = preload(\"res://Secondary.tscn\")\n var secondary_instance = secondary_scene.instantiate()\n add_child(secondary_instance)\n secondary_instance.information = \"1st\"\n secondary_instance.position = Vector2(510, 320)\n secondary_instance.secondary_clicked.connect(handle_signal)\n \n secondary_instance = secondary_scene.instantiate()\n add_child(secondary_instance)\n secondary_instance.information = \"2nd\"\n secondary_instance.position = Vector2(710, 320)\n secondary_instance.secondary_clicked.connect(handle_signal) \n\nfunc handle_signal(value):\n print(\"The value from the scene: \" + value)\n\nI'm expecting that mouse click will result in \"The value from the scene: 1st\" or \"The value from the scene: 2nd\" depending on which instance I clicked.\nThe actual result is that I get two prints, regardless of which one I clicked. This output was made by a single click:"} +{"id": "000219", "text": "In the 3d fps tutorial godot documentation, there is a line of code near the bottom that's outdated. I am new to coding with godot, I have a little bit of experience with java script but that's it. The line of code looks like this.\nVVV\nvel = move_and_slide(vel, Vector3(0, 1, 0), 0.05, 4, deg2rad(MAX_SLOPE_ANGLE))\nI know to fix the deg2rad part, it should be deg_to_rad, but once I do that it gives me this error.\nToo many arguments for \"move_and_slide()\" call. Expected at most 0 but received 5.\nI don't know what to try because like I said I am new to coding with godot, I understand a little bit of GDScript, but not enough to fix this problem on my own."} +{"id": "000220", "text": "It worked once, but in a wrong place. I changed some container size flags, and now it's been 3 hours since all went wrong(\nThe problem:\nI have NinePatchRect (1) inside another NinePatchRect (2) inside GridContainer. I need to pass outer (2) rect's size to the inner (1) one. Both NinePatchRects have textures set at runtime. 
As I'm working within a container the only way to get starting size would be to set an anchor for (2), otherwise the size would equal to (0, 0). And I can actually see the size.x value change in the editor after assigning the anchors. BUT, once the code is executed, by the time I get to the size.x value, it becomes 0. Even though the texture is placed and shown correctly.\nNPR (2) has following size flags:\nHorizontal Fill Expand\nVertical Fill\nNPR (1) is set to layout_mode POSITION at the position of NPR (2). If I use any anchors, I won't be able to change the size of the rect (which I need to do dynamically).\nI actually have a RichTextLabel (of the same size I could use) nearby within the same GridContainer, and I can't get its size value as well, as it turns to (0, 0).\nQuestion:\nHow do I get the size of GridContainer's child?\nEdit:\nI did find this post on godot forums which seems to be what I need, but the trick with idle frame doesn't work, as I still get size (0, 0)"} +{"id": "000221", "text": "I'm currently in the process of making a screen that has ability slots, and it is up to the player to mix and match them and change the order at will. The Drag and Drop functionality worked just fine when it was one-way, but when I tried adding the process of actually swapping the data between the two nodes (should both have any), I keep running into errors.\nI have a custom resource called Ability and the slots have a variable called \"selected_ability\" that takes these and uses it to set the textures and other information.\nextends TextureRect\nclass_name AbilitySlot\n\n@export var selected_ability: Ability\n\n@onready var slot: AbilitySlot = $\".\"\n\n\nfunc _process(_delta):\n if selected_ability:\n slot.texture = selected_ability.icon\n else:\n slot.texture = null\n \nfunc _get_drag_data(_at_position):\n set_drag_preview(get_preview())\n global.origin_slot = self.get_name()\n global.origin_ability = self.selected_ability\n print(global.origin_slot)\n print(global.origin_ability)\n return slot.selected_ability\n \nfunc get_preview():\n var preview_texture = TextureRect.new()\n if selected_ability:\n preview_texture.texture = slot.selected_ability.icon\n preview_texture.expand_mode = 1\n preview_texture.size = Vector2(64,64)\n var preview = Control.new()\n preview.add_child(preview_texture)\n \n return preview\n \nfunc _can_drop_data(_at_position, data):\n global.target_slot = get_name()\n if selected_ability:\n global.target_ability = selected_ability\n else:\n global.target_ability = null\n print(global.target_slot)\n print(global.target_ability)\n return true\n\nfunc _drop_data(_at_position, data):\n global.origin_slot.selected_ability = global.target_ability < ERROR\n selected_ability = global.origin_ability\n\n\nThe program crashes at Line 45 (the second from the bottom) with the error \"Invalid set index 'selected_ability' (on base: 'StringName') with value of type 'Resource (Ability)'.\"\nIf I comment the problematic line out, the program successfully copies the desired contents of the origin slot to the target slot, but the origin slot remains unchanged, so it acts as an infinite duplicate source. 
I know this because the abilities that slots set up successfully pass onto the next screen, it is only the swap that doesn't work.\nWhat would be the correct way to handle the origin slot if I want to switch data?"} +{"id": "000222", "text": "Is there a way to use gdscript to obtain the absolute path of \"res://\" on the current device?\nI have encountered some issues that require the use of some code from C #, but the paths \"res://\" and \"users://\" cannot be used in C # (non godot API).\nI want to know if there is a gdscript method that can obtain the absolute paths of these two paths.\nlike :\n var dir = DirAccess.open(\"res://assets/\")\n var path = dir.get_current_dir_absolute() # there is no function named that\n print_debug(path) # out put: D://.../assets\n\nor something else.\nI tried all the methods of the DirAccess class, but none of them worked.\nIf someone could tell me how to do this, I would be very grateful!"} +{"id": "000223", "text": "Firstly, gotta make sure AnimatedSprite2D is the right node to use. I'm making one of those random clicker games where you click a button and get a random chance of getting different items. I need certain items to be rarer than others, and the sprite to show the item you get, but also be able to send info such as that item's rarity, name, etc.\nRight now I was thinking of having a bunch of one frame animations on one animated sprite and then using the name of the animation to find the metadata.\nIf AnimatedSprite2D is the right thing to use, what would the script look like?"} +{"id": "000224", "text": "I use the following code to display a dragged item preview:\nfunc _get_drag_data(at_position: Vector2) -> Variant:\n var drag_preview := TextureRect.new()\n drag_preview.expand_mode = TextureRect.EXPAND_IGNORE_SIZE\n drag_preview.texture = icon_texture_rect.texture\n drag_preview.custom_minimum_size = Vector2(80, 80)\n drag_preview.modulate = Color(1, 1, 1, 0.75)\n set_drag_preview(drag_preview)\n # return this item slot as the thing to be dragged\n return self\n\nIt works, but the dragged rectangle is being dragged by its top-left corner, while I\u2019d like the mouse pointer to drag its middle:\n\nDoes anybody please have a suggestion on how to implement this?\nI have not found a suitable method in TextureRect and Control docs."} +{"id": "000225", "text": "Godot 4.2.2\nOS: Mac 14.5 (23F79)\nChip: Apple M2 Pro\nI tried to create image dynamically and save it as .jpg image.\nfunc _ready():\n var user_data_dir = OS.get_user_data_dir()\n print(\"User data directory:\", user_data_dir)\n\n # Create a new image with specified width, height, and format\n var image = Image.new()\n image.create(256, 256, false, Image.FORMAT_RGB8) # Creating a 256x256 image with RGB format\n \n # Fill the image with black\n image.fill(Color.BLACK) # Fill the entire image with black color\n \n # Check if the image is empty (filled it with black); i don't think this is necessary but just checking\n var is_empty = true\n for y in range(image.get_height()):\n for x in range(image.get_width()):\n var pixel_color = image.get_pixel(x, y)\n if pixel_color != Color.TRANSPARENT: # Check against black color with transparent\n is_empty = false\n break\n if not is_empty:\n break\n\n if is_empty:\n print(\"The image is empty.\")\n else:\n print(\"The image is not empty.\")\n\n # Save the image as a JPG file\n var file_path_jpg = user_data_dir + \"/black_image2.jpg\"\n var result_jpg = image.save_jpg(file_path_jpg)\n\n if result_jpg == OK:\n print(\"JPG saved successfully at: \", 
file_path_jpg)\n else:\n print(\"Failed to save JPG. Error code: \", str(result_jpg))\n\n # Verify if the file exists\n var jpg_exists = FileAccess.file_exists(file_path_jpg)\n print(\"JPG exists: \", str(jpg_exists))\n\nOutput Log:\nUser data directory:/Users/xxxx/Library/Application Support/Godot/app_userdata/game\nThe image is empty.\nFailed to save JPG. Error code: 31\nJPG exists: true\n\neven thought file created its empty.\nI tried to create an RGB8 image, fill it with color, and save it. But for some reason, the saved file is empty. Any idea why this is happening?\nNote: I\u2019ve been learning Godot for the past two weeks after switching from Unity, so if this is a newbie question, please bear with me. Thanks!\nI also tried to save the png file it didn't even create the png file but i tried with .jpg it created file but with empty data. All the code i have written is in detail section and output log.\nI want to create image, write something in image and save that image that's it."} +{"id": "000226", "text": "MultiplayerSychronizers keep data updated between players continuously as a game goes on, but is there a way to transfer entire nodes to all the connected clients exactly once? I have a MultiMeshInstance3D that I want to generate once and then never update again, but I don't want to have to individually generate it on each client's computer. Is there a way I can make an override @rpc version of add_child or something?\nThe MultiMeshInstance3D is a PackedScene, if that's pertinent. I tried adding a MultiplayerSychronizer child that synced the multimesh variable of scene root, but it threw an error from a .cpp file from deep within the base Godot files:\n\nE 0:00:13:0803 get_node: Node not found: \"Manager/GameBoard/Map/TileMap/MultiplayerSynchronizer\" (relative to \"/root\").\n Method/function failed. Returning: nullptr\n scene/main/node.cpp:1364 @ get_node()"} +{"id": "000227", "text": "I have a packed scene whose layout is like so:\nControl\n- Label\n- Label\n- MenuButton\n- RichTextLabel\n- SubViewportContainer\n- - SubViewport\n- - - MeshInstance3D\n- - - MeshInstance3D\n- - - Camera3D\n\nI instantiate instances of the scene, fill out the necessary details (text for labels, mesh for mesh instances, etc), then add them as children to a VBoxContainer who is the child of a ScrollContainer, but they are only offset by a few pixels, not the 100 pixels the Control node's size.y is set to. I tested it by manually adding instances of the Packed Scene and the same issue occurs. Why is this? How do I fix it?"} +{"id": "000228", "text": "Need some help.\nI have the following json content in a file and would like to use langchain.js and gpt to parse , store and answer question such as\nfor example:\n\"find me jobs with 2 year experience\" ==> should return a list\n\"I have knowledge in javascript find me jobs\" ==> should return the jobs pbject\nI use langchain json loader and I see the file is parse but it say that it find 13 docs . There is only be 3 docs in file . 
Is the json structure not correct?\nHere is snippet of my parse code\nconst loader = new DirectoryLoader(docPath, {\n \".json\": (path) => new JSONLoader(path),\n});\n\nconst docs = await loader.load();\nconsole.log(docs);\nconsole.log(docs.length);\n\nHere is my input data\n[\n {\n \"jobid\":\"job1\",\n \"title\":\"software engineer\"\n \"skills\":\"java,javascript\",\n \"description\":\"this job requires a associate degrees in CS and 2 years experience\"\n },\n {\n \"jobid\":\"job2\",\n \"skills\":\"math, accounting, spreadsheet\",\n \"description\":\"this job requires a degrees in accounting and 2 years experience\"\n },\n {\n \"jobid\":\"job3\",\n \"title\":\"programmer\"\n \"skills\":\"java,javascript,cloud computing\",\n \"description\":\"this job requires a ,master degrees in CS and 3 years experience\"\n }\n \n]\n\nOUTPUT\n[\n Document {\n pageContent: 'job1',\n metadata: {\n source: 'langchain-document-loaders-in-node-js/documents/jobs.json',\n line: 1\n }\n },\n Document {\n pageContent: 'software engineer',\n metadata: {\n source: 'langchain-document-loaders-in-node-js/documents/jobs.json',\n line: 2\n }\n },\n Document {\n pageContent: 'java,javascript',\n metadata: {\n source: 'langchain-document-loaders-in-node-js/documents/jobs.json',\n line: 3\n }\n },\n Document {\n pageContent: 'this job requires a associate degrees in CS and 2 years experience',\n metadata: {\n source: 'langchain-document-loaders-in-node-js/documents/jobs.json',\n line: 4\n }\n },\n Document {\n pageContent: 'job2',\n metadata: {\n source: 'langchain-document-loaders-in-node-js/documents/jobs.json',\n line: 5\n }\n },\n\n..."} +{"id": "000229", "text": "I am playing with langchain/openai/faiss to create chatbot that reads all PDFs, and can answer based on what it learned from them.\nWhat I want to know is there a way to limit answers to knowledge only from documentation, if answer is not in docs bot should respond I do not know or something like that.\nHere is the code:\n llm = ChatOpenAI(temperature=0, max_tokens=1000,\n model_name=\"gpt-3.5-turbo-16k\")\n memory = ConversationBufferMemory(memory_key=\"chat_history\")\n chat = ConversationalRetrievalChain.from_llm(\n llm=llm,retriever=vector_store.as_retriever(),memory=memory)\n \n if \"messages\" not in st.session_state:\n st.session_state.messages = []\n\n if not st.session_state.messages:\n welcome_message = {\"role\": \"assistant\",\n \"content\": \"Hello, how can i help?\"}\n st.session_state.messages.append(welcome_message)\n\n for message in st.session_state.messages:\n with st.chat_message(message[\"role\"]):\n st.markdown(message[\"content\"])\n\n\n if prompt := st.chat_input(\"State your question\"):\n st.session_state.messages.append({\"role\": \"user\", \"content\": prompt})\n with st.chat_message(\"user\"):\n st.markdown(prompt)\n result = chat({\"question\": prompt, \"chat_history\": [\n (message[\"role\"], message[\"content\"]) for message in st.session_state.messages]})\n\n with st.chat_message(\"assistant\"):\n full_response = result[\"answer\"]\n st.markdown(full_response)\n\n st.session_state.messages.append(\n {\"role\": \"assistant\", \"content\": full_response})"} +{"id": "000230", "text": "I just have a newly created Environment in Anaconda (conda 22.9.0 and Python 3.10.10). Then I proceed to install langchain (pip install langchain if I try conda install langchain it does not work). 
According to the quickstart guide I have to install one model provider so I install openai (pip install openai).\nThen I enter to the python console and try to load a PDF using the class UnstructuredPDFLoader and I get the following error. What the problem could be?\n(langchain) C:\\Users\\user>python\nPython 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)] on win32\n>>> from langchain.document_loaders import UnstructuredPDFLoader\n>>> loader = UnstructuredPDFLoader(\"C:\\\\\\\\data\\\\name-of-file.pdf\")\nTraceback (most recent call last):\n File \"C:\\\\envs\\langchain\\lib\\site-packages\\langchain\\document_loaders\\unstructured.py\", line 32, in __init__\n import unstructured # noqa:F401\nModuleNotFoundError: No module named 'unstructured'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"C:\\\\envs\\langchain\\lib\\site-packages\\langchain\\document_loaders\\unstructured.py\", line 90, in __init__\n super().__init__(mode=mode, **unstructured_kwargs)\n File \"C:\\\\envs\\langchain\\lib\\site-packages\\langchain\\document_loaders\\unstructured.py\", line 34, in __init__\n raise ValueError(\nValueError: unstructured package not found, please install it with `pip install unstructured`"} +{"id": "000231", "text": "I was following a tutorial on langchain, and after using loader.load() to load a PDF file, it gave me an error and suggested that some dependencies are missing and I should install them using pip install unstructured[local-inference]. So, I did. But it is now installing a whole lot of packages. A whole lot of it includes some packages to do with nvidia-*. Can someone please explain what this command does? It took a good couple of hours for this command to complete."} +{"id": "000232", "text": "I am trying to set \"gpt-3.5-turbo\" model in my OpenAI instance using langchain in node.js, but below way sends my requests defaultly as text-davinci model.\nconst { OpenAI } = require(\"langchain/llms\");\nconst { ConversationChain } = require(\"langchain/chains\");\nconst { BufferMemory } = require(\"langchain/memory\");\n\nconst model = new OpenAI({ model:\"gpt-3.5-turbo\", openAIApiKey: \"###\", temperature: 0.9 });\nconst memory = new BufferMemory();\n\nconst chain = new ConversationChain({llm:model, memory: memory});\n\nasync function x(){\nconst res = await chain.call({input:\"Hello this is xyz!\"});\nconst res2 = await chain.call({input:\"Hello what was my name?\"});\nconsole.log(res);\nconsole.log(res2);\n}\n\nx();\n\nOn documentation, i found the way to setting model with python. It sets with model_name attribute on the instance. But this way doesn't work with nodejs. 
Is there any way to setting custom models with langchain node.js ?"} +{"id": "000233", "text": "Question #1:\nIs there a way of using Mac with M1 CPU and llama_index together?\nI cannot pass the bellow assertion:\nAssertionError Traceback (most recent call last)\n in \n 6 from transformers import pipeline\n 7 \n----> 8 class customLLM(LLM):\n 9 model_name = \"google/flan-t5-large\"\n 10 pipeline = pipeline(\"text2text-generation\", model=model_name, device=0, model_kwargs={\"torch_dtype\":torch.bfloat16})\n\n in customLLM()\n 8 class customLLM(LLM):\n 9 model_name = \"google/flan-t5-large\"\n---> 10 pipeline = pipeline(\"text2text-generation\", model=model_name, device=0, model_kwargs={\"torch_dtype\":torch.bfloat16})\n 11 \n 12 def _call(self, prompt, stop=None):\n\n~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)\n 868 kwargs[\"device\"] = device\n 869 \n--> 870 return pipeline_class(model=model, framework=framework, task=task, **kwargs)\n\n~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/text2text_generation.py in __init__(self, *args, **kwargs)\n 63 \n 64 def __init__(self, *args, **kwargs):\n---> 65 super().__init__(*args, **kwargs)\n 66 \n 67 self.check_model_type(\n\n~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/base.py in __init__(self, model, tokenizer, feature_extractor, modelcard, framework, task, args_parser, device, binary_output, **kwargs)\n 776 # Special handling\n 777 if self.framework == \"pt\" and self.device.type != \"cpu\":\n--> 778 self.model = self.model.to(self.device)\n 779 \n 780 # Update config with task specific parameters\n\n~/Library/Python/3.9/lib/python/site-packages/transformers/modeling_utils.py in to(self, *args, **kwargs)\n 1680 )\n 1681 else:\n-> 1682 return super().to(*args, **kwargs)\n 1683 \n 1684 def half(self, *args):\n\n~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in to(self, *args, **kwargs)\n 1143 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\n 1144 \n-> 1145 return self._apply(convert)\n 1146 \n 1147 def register_full_backward_pre_hook(\n\n~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in _apply(self, fn)\n 795 def _apply(self, fn):\n 796 for module in self.children():\n--> 797 module._apply(fn)\n 798 \n 799 def compute_should_use_set_data(tensor, tensor_applied):\n\n~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in _apply(self, fn)\n 818 # `with torch.no_grad():`\n 819 with torch.no_grad():\n--> 820 param_applied = fn(param)\n 821 should_use_set_data = compute_should_use_set_data(param, param_applied)\n 822 if should_use_set_data:\n\n~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in convert(t)\n 1141 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,\n 1142 non_blocking, memory_format=convert_to_format)\n-> 1143 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\n 1144 \n 1145 return self._apply(convert)\n\n~/Library/Python/3.9/lib/python/site-packages/torch/cuda/__init__.py in _lazy_init()\n 237 \"multiprocessing, you must use the 'spawn' start method\")\n 238 if not hasattr(torch._C, '_cuda_getDeviceCount'):\n--> 239 raise AssertionError(\"Torch not 
compiled with CUDA enabled\")\n 240 if _cudart is None:\n 241 raise AssertionError(\n\nAssertionError: Torch not compiled with CUDA enabled\n\nObviously I've no Nvidia card, but I've read Pytorch is now supporting Mac M1 as well\nI'm trying to run the below example:\nfrom llama_index import SimpleDirectoryReader, LangchainEmbedding, GPTListIndex,GPTSimpleVectorIndex, PromptHelper\nfrom langchain.embeddings.huggingface import HuggingFaceEmbeddings\nfrom llama_index import LLMPredictor, ServiceContext\nimport torch\nfrom langchain.llms.base import LLM\nfrom transformers import pipeline\n\nclass customLLM(LLM):\n model_name = \"google/flan-t5-large\"\n pipeline = pipeline(\"text2text-generation\", model=model_name, device=0, model_kwargs={\"torch_dtype\":torch.bfloat16})\n\n def _call(self, prompt, stop=None):\n return self.pipeline(prompt, max_length=9999)[0][\"generated_text\"]\n \n def _identifying_params(self):\n return {\"name_of_model\": self.model_name}\n\n def _llm_type(self):\n return \"custom\"\n\n\nllm_predictor = LLMPredictor(llm=customLLM())\n\nQuestion #2:\nAssuming the answer for the above is no - I don't mind using Google Colab with GPU, but once the index will be made, will it be possible to download it and use it on my Mac?\ni.e. something like:\non Google Colab:\nservice_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, embed_model=embed_model)\nindex = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)\nindex.save_to_disk('index.json')\n\n... and later on my Mac use load_from_file"} +{"id": "000234", "text": "I have a quick question: I'm using the Chroma vector store with LangChain.\nAnd I brought up a simple docsearch with Chroma.from_texts. I was initially very confused because i thought the similarity_score_with_score would be higher for queries that are close to answers, but it seems from my testing the opposite is true. Is this becasue it's returning the 'distance' between the two vectors when it searches? 
I was looking at the docs, but they only say \"List of Documents most similar to the query and score for each\" and don't explain what 'score' is.\nDoc reference: https://python.langchain.com/en/latest/reference/modules/vectorstores.html?highlight=similarity_search#langchain.vectorstores.Annoy.similarity_search_with_score I can also give more info on the (small to start) dataset I'm using and the queries I tested with."} +{"id": "000235", "text": "I am running a LangChain process on a local Node server.\nIn my code:\n // Create docs with a loader\nconst loader = new TextLoader(\"Documentation/hello.txt\");\nconst docs = await loader.load();\n\n// Create vector store and index the docs\nconst vectorStore = await Chroma.fromDocuments(docs, new OpenAIEmbeddings(), {\ncollectionName: \"z-test-collection\",\n});\n\n// Search for the most similar document\nconst response = await vectorStore.similaritySearch(\"hello\", 1);\n\nconsole.log(response);\n\nI get the following error message on\nconst vectorStore = await Chroma.fromDocuments(docs, new OpenAIEmbeddings(), {\ncollectionName: \"z-test-collection\",\n});:\n/home/alexandre/projects/langChain/ProcessGPT/node_modules/chromadb/dist/main/index.js:291\n return response.data;\n ^\nTypeError: Cannot read properties of undefined (reading 'data')\nat /home/alexandre/projects/langChain/ProcessGPT/node_modules/chromadb/dist/main/index.js:291:29\nat process.processTicksAndRejections (node:internal/process/task_queues:95:5)\nat async ChromaClient.getOrCreateCollection (/home/alexandre/projects/langChain/ProcessGPT/node_modules/chromadb/dist/main/index.js:286:31)\nat async Chroma.ensureCollection (/home/alexandre/projects/langChain/ProcessGPT/node_modules/langchain/dist/vectorstores/chroma.cjs:60:31)\nat async Chroma.addVectors (/home/alexandre/projects/langChain/ProcessGPT/node_modules/langchain/dist/vectorstores/chroma.cjs:77:28)\nat async Chroma.addDocuments (/home/alexandre/projects/langChain/ProcessGPT/node_modules/langchain/dist/vectorstores/chroma.cjs:52:9)\nat async Chroma.fromDocuments (/home/alexandre/projects/langChain/ProcessGPT/node_modules/langchain/dist/vectorstores/chroma.cjs:121:9)\nat async testChroma (/home/alexandre/projects/langChain/ProcessGPT/controllers/backendController.js:31:25)\n\nThe same error message appears regardless of the situation in which the method is called.\nAre there other requirements apart from \"npm install -S langchain\" and \"npm install -S chromadb\"?\nThank you in advance"} +{"id": "000236", "text": "I'm following a tutorial on HuggingFace (let's say this one, though I get the same result with other Dolly models). I am trying to run predictions with context but I receive an empty string as output. I tried different models and text variations.\nRegular question answering works as expected. It only breaks when asking questions about the context.\nWhat could be the issue here?\ncontext = \"\"\"George Washington (February 22, 1732[b] \u2013 December 14, 1799) was an American military officer, statesman, and Founding Father who served as the first president of the United States from 1789 to 1797.\"\"\"\nllm_context_chain.predict(instruction=\"When was George Washington president?\", context=context)\nOut[5]: ''\nPS: I'm using a GPU cluster on Azure Databricks, if that matters"} +{"id": "000237", "text": "I'm trying to implement a langchain agent that is able to ask clarifying questions in case some information is missing. Is this at all possible? 
A simple example would be:\nInput: \"Please give me a recipe for a cake\"\nAgent: \"Certainly. What kind of cake do you have in mind?\"\nInput: \"A chocolate cake\"\nAgent: \"Certainly, here is a recipe for a chocolate cake...\""} +{"id": "000238", "text": "I am experimenting with langchain and its applications, but as a newbie, I could not understand how the embeddings and indexing really work together here. I know what these two are, but I can't figure out a way to use the index that I created and saved using persist_directory.\nI successfully saved the object created by VectorstoreIndexCreator using the following code:\nindex = VectorstoreIndexCreator(vectorstore_kwargs={\"persist_directory\":\"./custom_save_dir_path\"}).from_loaders([loader])\n\nBut I cannot find a way to use the .pkl files created. How can I use these files in my chain to retrieve data?\nAlso, how does the billing in OpenAI work? If I cannot use any saved embeddings or index, will it re-embed all the data every time I run the code?\nAs a beginner, I am still learning my way around and any assistance would be greatly appreciated.\nHere is the full code:\nfrom langchain.document_loaders import CSVLoader\nfrom langchain.indexes import VectorstoreIndexCreator\nfrom langchain.chains import RetrievalQA\nfrom langchain.llms import OpenAI\nimport os\nos.environ[\"OPENAI_API_KEY\"] = \"sk-xxx\"\n# Load the documents\nloader = CSVLoader(file_path='data/data.csv')\n\n#creates an object with vectorstoreindexcreator\nindex = VectorstoreIndexCreator(vectorstore_kwargs={\"persist_directory\":\"./custom_save_dir_path\"}).from_loaders([loader])\n\n# Create a question-answering chain using the index\nchain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", retriever=index.vectorstore.as_retriever(), input_key=\"question\")\n\n# Pass a query to the chain\nwhile True:\n query = input(\"query: \")\n response = chain({\"question\": query})\n print(response['result'])"} +{"id": "000239", "text": "I'm using langchain to process a whole bunch of documents which are in a Mongo database.\nI can load all documents fine into the chromadb vector storage using langchain. Nothing fancy being done here. This is my code:\n\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\n\nfrom langchain.vectorstores import Chroma\ndb = Chroma.from_documents(docs, embeddings, persist_directory='db')\ndb.persist()\n\n\nNow, after storing the data, I want to get a list of all the documents and embeddings WITH ids.\nThis is so I can store them back into MongoDb.\nI also want to put them through Bertopic to get the topic categories.\nQuestion 1 is: how do I get all documents I've just stored in the Chroma database? 
I want the documents, and all the metadata.\nMany thanks for your help!"} +{"id": "000240", "text": "I am trying to query a stack of word documents using langchain, yet I get the following traceback.\nMay I ask what's the argument that's expected here?\nAlso, side question, is there a way to do such a query locally (without internet access and openai)?\nTraceback:\nTraceback (most recent call last):\n\n File C:\\Program Files\\Spyder\\pkgs\\spyder_kernels\\py3compat.py:356 in compat_exec\n exec(code, globals, locals)\n\n File c:\\data\\langchain\\langchaintest.py:44\n index = VectorstoreIndexCreator().from_loaders(loaders)\n\n File ~\\AppData\\Roaming\\Python\\Python38\\site-packages\\langchain\\indexes\\vectorstore.py:72 in from_loaders\n docs.extend(loader.load())\n\n File ~\\AppData\\Roaming\\Python\\Python38\\site-packages\\langchain\\document_loaders\\text.py:17 in load\n with open(self.file_path, encoding=self.encoding) as f:\n\nOSError: [Errno 22] Invalid argument:\n\n... where \"invalid argument: \" is followed by the raw text from the word document.\nCode:\nimport os\nos.environ[\"OPENAI_API_KEY\"] = \"xxxxxx\"\n\n\nimport os\nimport docx\nfrom langchain.document_loaders import TextLoader\n\n# Function to get text from a docx file\ndef get_text_from_docx(file_path):\n doc = docx.Document(file_path)\n full_text = []\n for paragraph in doc.paragraphs:\n full_text.append(paragraph.text)\n \n return '\\n'.join(full_text)\n\n# Load multiple Word documents\nfolder_path = 'C:/Data/langchain'\nword_files = [os.path.join(folder_path, file) for file in os.listdir(folder_path) if file.endswith('.docx')]\n\nloaders = []\nfor word_file in word_files:\n text = get_text_from_docx(word_file)\n loader = TextLoader(text)\n loaders.append(loader)\n \n \nfrom langchain.indexes import VectorstoreIndexCreator\n\nindex = VectorstoreIndexCreator().from_loaders(loaders)\n\nquery = \"What are the main points discussed in the documents?\"\n\nresponses = index.query(query)\nprint(responses)\n\nresults_with_source=index.query_with_sources(query)\nprint(results_with_source)"} +{"id": "000241", "text": "Getting the error while trying to run a langchain code.\nValueError: `run` not supported when there is not exactly one input key, got ['question', 'documents'].\nTraceback:\nFile \"c:\\users\\aviparna.biswas\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\streamlit\\runtime\\scriptrunner\\script_runner.py\", line 565, in _run_script\n exec(code, module.__dict__)\nFile \"D:\\Python Projects\\POC\\Radium\\Ana\\app.py\", line 49, in \n answer = question_chain.run(formatted_prompt)\nFile \"c:\\users\\aviparna.biswas\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\langchain\\chains\\base.py\", line 106, in run\n f\"`run` not supported when there is not exactly one input key, got ['question', 'documents'].\"\n\nMy code is as follows.\nimport os\nfrom apikey import apikey\n\nimport streamlit as st\nfrom langchain.llms import OpenAI\nfrom langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain, SequentialChain\n#from langchain.memory import ConversationBufferMemory\nfrom docx import Document\n\nos.environ['OPENAI_API_KEY'] = apikey\n\n# App framework\nst.title(' Colab Ana Answering Bot..')\nprompt = st.text_input('Plug in your question here')\n\n\n# Upload multiple documents\nuploaded_files = st.file_uploader(\"Choose your documents (docx files)\", accept_multiple_files=True, type=['docx'])\ndocument_text = \"\"\n\n# Read and combine Word documents\ndef 
read_docx(file):\n doc = Document(file)\n full_text = []\n for paragraph in doc.paragraphs:\n full_text.append(paragraph.text)\n return '\\n'.join(full_text)\n\nfor file in uploaded_files:\n document_text += read_docx(file) + \"\\n\\n\"\n\nwith st.expander('Contextual Prompt'):\n st.write(document_text)\n\n# Prompt template\nquestion_template = PromptTemplate(\n input_variables=['question', 'documents'],\n template='Given the following documents: {documents}. Answer the question: {question}'\n)\n\n# Llms\nllm = OpenAI(temperature=0.9)\nquestion_chain = LLMChain(llm=llm, prompt=question_template, verbose=True, output_key='answer')\n\n# Show answer if there's a prompt and documents are uploaded\nif prompt and document_text:\n formatted_prompt = question_template.format(question=prompt, documents=document_text)\n answer = question_chain.run(formatted_prompt)\n st.write(answer['answer'])\n\nI have gone through the documentation and even then I am getting the same error. I have already seen demos where multiple prompts are being taken by langchain."} +{"id": "000242", "text": "How do I add memory to RetrievalQA.from_chain_type? Or, how do I add a custom prompt to ConversationalRetrievalChain?\nFor the past 2 weeks I've been trying to make a chatbot that can chat over documents (so not just semantic search/QA, but with memory) and also with a custom prompt. I've tried every combination of all the chains and so far the closest I've gotten is ConversationalRetrievalChain, but without custom prompts, and RetrievalQA.from_chain_type, but without memory."} +{"id": "000243", "text": "I was trying to figure out a way to use StructuredTool as a multi-input Tool and use it from an Agent; for example, a ZeroShotAgent.\nSadly, it looks like all the agents are defined to use Tool[] instead of StructuredTool[] or ObjectTool[]. Is there any way I can do that?\nThis is my code; the final Tool (QuerySpecificFieldSupabaseTool) is a StructuredTool, which is incompatible with the Agents.\nexport class SupabaseToolkit extends Toolkit {\n tools: Tool[];\n cli: SupabaseClient\n dialect = \"supabase\"\n\n constructor(cli: SupabaseClient) {\n super()\n this.cli = cli\n this.tools = [\n new QuerySupabaseTool(cli),\n new ListTablesSupabaseTool(cli),\n new ListFieldsSupabaseTool(cli),\n new QuerySpecificFieldSupabaseTool(cli),\n ]\n }\n}"} +{"id": "000244", "text": "I'm working with langchain and ChromaDb using Python.\nNow, I know how to use document loaders. For instance, the below loads a bunch of documents into ChromaDb:\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings()\n\nfrom langchain.vectorstores import Chroma\ndb = Chroma.from_documents(docs, embeddings, persist_directory='db')\ndb.persist()\n\nBut what if I wanted to add a single document at a time? More specifically, I want to check if a document exists before I add it. This is so I don't keep adding duplicates.\nIf a document does not exist, only then do I want to get embeddings and add it.\nHow do I do this using langchain? I think I mostly understand langchain but have no idea how to do seemingly basic tasks like this."} +{"id": "000245", "text": "I am using a Python Flask app for chat over data. 
In the console I am getting streamable response directly from the OpenAI since I can enable streming with a flag streaming=True.\nThe problem is, that I can't \"forward\" the stream or \"show\" the strem than in my API call.\nCode for the processing OpenAI and chain is:\ndef askQuestion(self, collection_id, question):\n collection_name = \"collection-\" + str(collection_id)\n self.llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature, openai_api_key=os.environ.get('OPENAI_API_KEY'), streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))\n self.memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True, output_key='answer')\n \n chroma_Vectorstore = Chroma(collection_name=collection_name, embedding_function=self.embeddingsOpenAi, client=self.chroma_client)\n\n\n self.chain = ConversationalRetrievalChain.from_llm(self.llm, chroma_Vectorstore.as_retriever(similarity_search_with_score=True),\n return_source_documents=True,verbose=VERBOSE, \n memory=self.memory)\n \n\n result = self.chain({\"question\": question})\n \n res_dict = {\n \"answer\": result[\"answer\"],\n }\n\n res_dict[\"source_documents\"] = []\n\n for source in result[\"source_documents\"]:\n res_dict[\"source_documents\"].append({\n \"page_content\": source.page_content,\n \"metadata\": source.metadata\n })\n\n return res_dict\n\nand the API route code:\n@app.route(\"/collection//ask_question\", methods=[\"POST\"])\ndef ask_question(collection_id):\n question = request.form[\"question\"]\n # response_generator = document_thread.askQuestion(collection_id, question)\n # return jsonify(response_generator)\n\n def stream(question):\n completion = document_thread.askQuestion(collection_id, question)\n for line in completion['answer']:\n yield line\n\n return app.response_class(stream_with_context(stream(question)))\n\nI am testing my endpoint with curl and I am passing flag -N to curl, so I should get the streamable response, if it is possible.\nWhen I make API call first the endpoint is waiting to process the data (I can see in my terminal in VS code the streamable answer) and when finished, I get everything displayed in one go."} +{"id": "000246", "text": "I have the following code where I am asking questions based on my context, and am able to get the respective outputs in streaming format. 
However, I am creating an api for the same and not able to replicate similar results\nfrom langchain import OpenAI\nfrom types import FunctionType\nfrom llama_index import ServiceContext, GPTVectorStoreIndex, LLMPredictor, PromptHelper, SimpleDirectoryReader, load_index_from_storage\nimport sys\nimport os\nimport time \nfrom llama_index.response.schema import StreamingResponse\nimport uvicorn \nfrom fastapi import FastAPI\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom pydantic import BaseModel\nimport uvicorn \n\n\n\nos.environ[\"OPENAI_API_KEY\"] = \"your key here\" # gpt 3.5 turbo\n\n\napp = FastAPI()\n\napp.add_middleware(\n CORSMiddleware,\n allow_origins=['*'],\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n)\n\nfrom llama_index import StorageContext, load_index_from_storage, ServiceContext\nfrom langchain.chat_models import ChatOpenAI\n\ndef construct_index(directory_path):\n max_input_size = 4096\n num_outputs = 5000\n max_chunk_overlap = 256\n chunk_size_limit = 3900\n file_metadata = lambda x : {\"filename\": x}\n reader = SimpleDirectoryReader(directory_path, file_metadata=file_metadata)\n \n documents = reader.load_data()\n\n prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)\n llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name=\"gpt-3.5-turbo\", max_tokens=num_outputs))\n \n service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)\n\n index = GPTVectorStoreIndex.from_documents(\n documents=documents, service_context = service_context\n )\n \n index.storage_context.persist(\"./jsons/contentstack_llm\")\n return index\n \ndef get_index():\n max_input_size = 4000\n num_outputs = 1024\n max_chunk_overlap = 512\n chunk_size_limit = 3900\n prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)\n llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name=\"gpt-3.5-turbo\", max_tokens=num_outputs, streaming = True))\n \n service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)\n \n return service_context \n\n# construct_index(\"./docs\")\nstorage_context = StorageContext.from_defaults(persist_dir=\"./jsons/contentstack_llm\")\nservice_context = get_index()\nindex = load_index_from_storage(storage_context, service_context = service_context)\n\nquery_engine = index.as_query_engine(streaming = True)\nclass Item(BaseModel):\n input_text: str\n\n@app.post(\"/question_answering\")\nasync def create_item(item: Item):\n input_sentence = item.input_text\n response = query_engine.query(input_sentence)\n links = []\n return StreamingResponse(query_engine.query(input_sentence).response_gen)\n \n\nUpon executing the following code, TypeError: cannot pickle 'generator' object, I received the following error. Is there any workaround to do it in fastapi ? I am able to stream the answers in my console, but I would like to create a stream between my api and the output. 
Also, if not FastAPI, can we do a similar thing in Flask ?"} +{"id": "000247", "text": "I'm trying to load 6b 128b 8bit llama based model from file (note the model itself is an example, I tested others and got similar problems), the pipeline is completely eating up my 8gb of vram:\n\n\nMy code:\nfrom langchain.llms import HuggingFacePipeline\nfrom langchain import PromptTemplate, LLMChain\n\nimport torch\nfrom transformers import LlamaTokenizer, LlamaForCausalLM, LlamaConfig, pipeline\n\ntorch.cuda.set_device(torch.device(\"cuda:0\"))\n\nPATH = './models/wizardLM-7B-GPTQ-4bit-128g'\nconfig = LlamaConfig.from_json_file(f'{PATH}/config.json')\nbase_model = LlamaForCausalLM(config=config).half()\n\ntorch.cuda.empty_cache()\ntokenizer = LlamaTokenizer.from_pretrained(\n pretrained_model_name_or_path=PATH,\n low_cpu_mem_usage=True,\n local_files_only=True\n)\ntorch.cuda.empty_cache()\n\npipe = pipeline(\n \"text-generation\",\n model=base_model,\n tokenizer=tokenizer,\n batch_size=1,\n device=0,\n max_length=100,\n temperature=0.6,\n top_p=0.95,\n repetition_penalty=1.2\n)\n\nHow can I make the pipeline initiation consume less vram?\ngpu: AMD\u00ae Radeon rx 6600 (8gb vram, rocm 5.4.2 & torch)\nI want to mention that I managed to load the same model on other frameworks like \"KoboldAI\" or \"text-generation-webui\" so I know it should be possible.\nTo load the model \"wizardLM-7B-GPTQ-4bit-128g\" downloaded from huggingface and run it using with langchain on python.\npip list output:\n Package Version\n------------------------ ----------------\naccelerate 0.19.0\naiofiles 23.1.0\naiohttp 3.8.4\naiosignal 1.3.1\naltair 5.0.0\nanyio 3.6.2\nargilla 1.7.0\nasync-timeout 4.0.2\nattrs 23.1.0\nbackoff 2.2.1\nbeautifulsoup4 4.12.2\nbitsandbytes 0.39.0\ncertifi 2022.12.7\ncffi 1.15.1\nchardet 5.1.0\ncharset-normalizer 2.1.1\nchromadb 0.3.23\nclick 8.1.3\nclickhouse-connect 0.5.24\ncmake 3.25.0\ncolorclass 2.2.2\ncommonmark 0.9.1\ncompressed-rtf 1.0.6\ncontourpy 1.0.7\ncryptography 40.0.2\ncycler 0.11.0\ndataclasses-json 0.5.7\ndatasets 2.12.0\nDeprecated 1.2.13\ndill 0.3.6\nduckdb 0.8.0\neasygui 0.98.3\nebcdic 1.1.1\net-xmlfile 1.1.0\nextract-msg 0.41.1\nfastapi 0.95.2\nffmpy 0.3.0\nfilelock 3.9.0\nfonttools 4.39.4\nfrozenlist 1.3.3\nfsspec 2023.5.0\ngradio 3.28.3\ngradio_client 0.2.5\ngreenlet 2.0.2\nh11 0.14.0\nhnswlib 0.7.0\nhttpcore 0.16.3\nhttptools 0.5.0\nhttpx 0.23.3\nhuggingface-hub 0.14.1\nidna 3.4\nIMAPClient 2.3.1\nJinja2 3.1.2\njoblib 1.2.0\njsonschema 4.17.3\nkiwisolver 1.4.4\nlangchain 0.0.171\nlark-parser 0.12.0\nlinkify-it-py 2.0.2\nlit 15.0.7\nllama-cpp-python 0.1.50\nloralib 0.1.1\nlxml 4.9.2\nlz4 4.3.2\nMarkdown 3.4.3\nmarkdown-it-py 2.2.0\nMarkupSafe 2.1.2\nmarshmallow 3.19.0\nmarshmallow-enum 1.5.1\nmatplotlib 3.7.1\nmdit-py-plugins 0.3.3\nmdurl 0.1.2\nmonotonic 1.6\nmpmath 1.2.1\nmsg-parser 1.2.0\nmsoffcrypto-tool 5.0.1\nmultidict 6.0.4\nmultiprocess 0.70.14\nmypy-extensions 1.0.0\nnetworkx 3.0\nnltk 3.8.1\nnumexpr 2.8.4\nnumpy 1.24.1\nnvidia-cublas-cu11 11.10.3.66\nnvidia-cuda-cupti-cu11 11.7.101\nnvidia-cuda-nvrtc-cu11 11.7.99\nnvidia-cuda-runtime-cu11 11.7.99\nnvidia-cudnn-cu11 8.5.0.96\nnvidia-cufft-cu11 10.9.0.58\nnvidia-curand-cu11 10.2.10.91\nnvidia-cusolver-cu11 11.4.0.1\nnvidia-cusparse-cu11 11.7.4.91\nnvidia-nccl-cu11 2.14.3\nnvidia-nvtx-cu11 11.7.91\nolefile 0.46\noletools 0.60.1\nopenai 0.27.7\nopenapi-schema-pydantic 1.2.4\nopenpyxl 3.1.2\norjson 3.8.12\npackaging 23.1\npandas 1.5.3\npandoc 2.3\npcodedmp 1.2.6\npdfminer.six 20221105\nPillow 9.3.0\npip 23.0.1\nplumbum 
1.8.1\nply 3.11\nposthog 3.0.1\npsutil 5.9.5\npyarrow 12.0.0\npycparser 2.21\npydantic 1.10.7\npydub 0.25.1\nPygments 2.15.1\npygpt4all 1.1.0\npygptj 2.0.3\npyllamacpp 2.3.0\npypandoc 1.11\npyparsing 2.4.7\npyrsistent 0.19.3\npython-dateutil 2.8.2\npython-docx 0.8.11\npython-dotenv 1.0.0\npython-magic 0.4.27\npython-multipart 0.0.6\npython-pptx 0.6.21\npytorch-triton-rocm 2.0.1\npytz 2023.3\npytz-deprecation-shim 0.1.0.post0\nPyYAML 6.0\nred-black-tree-mod 1.20\nregex 2023.5.5\nrequests 2.28.1\nresponses 0.18.0\nrfc3986 1.5.0\nrich 13.0.1\nRTFDE 0.0.2\nscikit-learn 1.2.2\nscipy 1.10.1\nsemantic-version 2.10.0\nsentence-transformers 2.2.2\nsentencepiece 0.1.99\nsetuptools 66.0.0\nsix 1.16.0\nsniffio 1.3.0\nsoupsieve 2.4.1\nSQLAlchemy 2.0.15\nstarlette 0.27.0\nsympy 1.11.1\ntabulate 0.9.0\ntenacity 8.2.2\nthreadpoolctl 3.1.0\ntokenizers 0.13.3\ntoolz 0.12.0\ntorch 2.0.1+rocm5.4.2\ntorchaudio 2.0.2+rocm5.4.2\ntorchvision 0.15.2+rocm5.4.2\ntqdm 4.65.0\ntransformers 4.30.0.dev0\ntriton 2.0.0\ntyper 0.9.0\ntyping_extensions 4.4.0\ntyping-inspect 0.8.0\ntzdata 2023.3\ntzlocal 4.2\nuc-micro-py 1.0.2\nunstructured 0.6.6\nurllib3 1.26.13\nuvicorn 0.22.0\nuvloop 0.17.0\nwatchfiles 0.19.0\nwebsockets 11.0.3\nwheel 0.38.4\nwikipedia 1.4.0\nwrapt 1.14.1\nXlsxWriter 3.1.0\nxxhash 3.2.0\nyarl 1.9.2\nzstandard 0.21.0"} +{"id": "000248", "text": "I'm learning Langchain and vector databases.\nFollowing the original documentation I can read some docs, update the database and then make a query.\nhttps://python.langchain.com/en/harrison-docs-refactor-3-24/modules/indexes/vectorstores/examples/pinecone.html\nI want to access the same index and query it again, but without re-loading the embeddings and adding the vectors again to the ddbb.\nHow can I generate the same docsearch object without creating new vectors?\n# Load source Word doc\nloader = UnstructuredWordDocumentLoader(\"C:/Users/ELECTROPC/utilities/openai/data_test.docx\", mode=\"elements\")\ndata = loader.load()\n\n# Text splitting\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ntexts = text_splitter.split_documents(data)\n\n# Upsert vectors to Pinecone Index\npinecone.init(\n api_key=PINECONE_API_KEY, # find at app.pinecone.io\n environment=PINECONE_API_ENV\n)\nindex_name = \"mlqai\"\nembeddings = OpenAIEmbeddings(openai_api_key=os.environ['OPENAI_API_KEY'])\n\ndocsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name)\n\n\n# Query\nllm = OpenAI(temperature=0, openai_api_key=os.environ['OPENAI_API_KEY'])\nchain = load_qa_chain(llm, chain_type=\"stuff\")\n\nquery = \"que sabes de los patinetes?\"\ndocs = docsearch.similarity_search(query)\nanswer = chain.run(input_documents=docs, question=query)\nprint(answer)"} +{"id": "000249", "text": "I am doing a microservice with a document loader, and the app can't launch at the import level, when trying to import langchain's UnstructuredMarkdownLoader\n$ flask --app main run --debug\nTraceback (most recent call last):\n File \"venv/bin/flask\", line 8, in \n sys.exit(main())\n File \"venv/lib/python3.9/site-packages/flask/cli.py\", line 1063, in main\n cli.main()\n File \"venv/lib/python3.9/site-packages/click/core.py\", line 1055, in main\n rv = self.invoke(ctx)\n File \"venv/lib/python3.9/site-packages/click/core.py\", line 1657, in invoke\n return _process_result(sub_ctx.command.invoke(sub_ctx))\n File \"venv/lib/python3.9/site-packages/click/core.py\", line 1404, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File 
\"venv/lib/python3.9/site-packages/click/core.py\", line 760, in invoke\n return __callback(*args, **kwargs)\n File \"venv/lib/python3.9/site-packages/click/decorators.py\", line 84, in new_func\n return ctx.invoke(f, obj, *args, **kwargs)\n File \"venv/lib/python3.9/site-packages/click/core.py\", line 760, in invoke\n return __callback(*args, **kwargs)\n File \"venv/lib/python3.9/site-packages/flask/cli.py\", line 911, in run_command\n raise e from None\n File \"venv/lib/python3.9/site-packages/flask/cli.py\", line 897, in run_command\n app = info.load_app()\n File \"venv/lib/python3.9/site-packages/flask/cli.py\", line 308, in load_app\n app = locate_app(import_name, name)\n File \"venv/lib/python3.9/site-packages/flask/cli.py\", line 218, in locate_app\n __import__(module_name)\n File \"main.py\", line 5, in \n from lc_indexer import index_documents\n File \"lc_indexer.py\", line 5, in \n from langchain.document_loaders import UnstructuredMarkdownLoader\n File \"venv/lib/python3.9/site-packages/langchain/__init__.py\", line 6, in \n from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain\n File \"venv/lib/python3.9/site-packages/langchain/agents/__init__.py\", line 2, in \n from langchain.agents.agent import (\n File \"venv/lib/python3.9/site-packages/langchain/agents/agent.py\", line 16, in \n from langchain.agents.tools import InvalidTool\n File \"venv/lib/python3.9/site-packages/langchain/agents/tools.py\", line 8, in \n from langchain.tools.base import BaseTool, Tool, tool\n File \"venv/lib/python3.9/site-packages/langchain/tools/__init__.py\", line 42, in \n from langchain.tools.vectorstore.tool import (\n File \"venv/lib/python3.9/site-packages/langchain/tools/vectorstore/tool.py\", line 13, in \n from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain\n File \"venv/lib/python3.9/site-packages/langchain/chains/__init__.py\", line 2, in \n from langchain.chains.api.base import APIChain\n File \"venv/lib/python3.9/site-packages/langchain/chains/api/base.py\", line 13, in \n from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT\n File \"venv/lib/python3.9/site-packages/langchain/chains/api/prompt.py\", line 2, in \n from langchain.prompts.prompt import PromptTemplate\n File \"venv/lib/python3.9/site-packages/langchain/prompts/__init__.py\", line 3, in \n from langchain.prompts.chat import (\n File \"venv/lib/python3.9/site-packages/langchain/prompts/chat.py\", line 10, in \n from langchain.memory.buffer import get_buffer_string\n File \"venv/lib/python3.9/site-packages/langchain/memory/__init__.py\", line 28, in \n from langchain.memory.vectorstore import VectorStoreRetrieverMemory\n File \"venv/lib/python3.9/site-packages/langchain/memory/vectorstore.py\", line 10, in \n from langchain.vectorstores.base import VectorStoreRetriever\n File \"venv/lib/python3.9/site-packages/langchain/vectorstores/__init__.py\", line 2, in \n from langchain.vectorstores.analyticdb import AnalyticDB\n File \"venv/lib/python3.9/site-packages/langchain/vectorstores/analyticdb.py\", line 16, in \n from langchain.embeddings.base import Embeddings\n File \"venv/lib/python3.9/site-packages/langchain/embeddings/__init__.py\", line 19, in \n from langchain.embeddings.openai import OpenAIEmbeddings\n File \"venv/lib/python3.9/site-packages/langchain/embeddings/openai.py\", line 67, in \n class OpenAIEmbeddings(BaseModel, Embeddings):\n File \"pydantic/main.py\", line 197, in pydantic.main.ModelMetaclass.__new__\n File \"pydantic/fields.py\", line 506, in 
pydantic.fields.ModelField.infer\n File \"pydantic/fields.py\", line 436, in pydantic.fields.ModelField.__init__\n File \"pydantic/fields.py\", line 552, in pydantic.fields.ModelField.prepare\n File \"pydantic/fields.py\", line 663, in pydantic.fields.ModelField._type_analysis\n File \"pydantic/fields.py\", line 808, in pydantic.fields.ModelField._create_sub_type\n File \"pydantic/fields.py\", line 436, in pydantic.fields.ModelField.__init__\n File \"pydantic/fields.py\", line 552, in pydantic.fields.ModelField.prepare\n File \"pydantic/fields.py\", line 668, in pydantic.fields.ModelField._type_analysis\n File \"/home/my_username/.pyenv/versions/3.9.16/lib/python3.9/typing.py\", line 852, in __subclasscheck__\n return issubclass(cls, self.__origin__)\nTypeError: issubclass() arg 1 must be a class\n\nHere is the content of lc_indexer.py where the langchain imports occur\n# INDEX DOCUMENTS\nimport os\nfrom os.path import join, isfile\n\nfrom langchain.document_loaders import UnstructuredMarkdownLoader\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.text_splitter import TokenTextSplitter, CharacterTextSplitter\nfrom langchain.vectorstores import Chroma\n\n\ndef index_documents(source_directories: list[str], persist_directory: str, chunk_size: int = 1000,\n chunk_overlap: int = 15):\n \"\"\"\n Indexe les documents venant des r\u00e9pertoires fournis\n\n :param source_directories: list[str]\n :param persist_directory: str\n :param chunk_size: int = 1000\n :param chunk_overlap: int = 15\n :return:\n \"\"\"\n\n only_files = []\n for directory in source_directories:\n my_path = f'{directory}'\n for f in os.listdir(my_path):\n if isfile(join(my_path, f)):\n only_files.append(f'{my_path}/{f}')\n\n embeddings = OpenAIEmbeddings()\n for file in only_files:\n index_file_to_chroma(file, persist_directory, embeddings, chunk_size, chunk_overlap)\n\n\ndef index_file_to_chroma(file: str, persist_directory: str, embeddings: OpenAIEmbeddings, chunk_size: int, chunk_overlap: int):\n \"\"\"\n Indexe un document dans Chroma\n\n :param embeddings: OpenAIEmbeddings\n :param file: str\n :param persist_directory: str\n :param chunk_size: int\n :param chunk_overlap: int\n :return:\n \"\"\"\n\n loader = UnstructuredMarkdownLoader(file_path=file, encoding='utf8')\n docs = loader.load()\n text_splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=0)\n pages = text_splitter.split_documents(docs)\n text_splitter = TokenTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)\n texts = text_splitter.split_documents(pages)\n db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)\n db.persist()\n print(f'Indexed file {file} for module {persist_directory}')\n db = None\n# /INDEX DOCUMENTS\n\nThis file has been copied from a test project where no such error occurs at all when trying it but it was tested from the CLI so it may change something here.\nAlready tried copying those functions and the imports into the main.py file, but I get the same error.\nI have tried commenting the import of lc_indexer.py and the call to the index_documents function in the main.py, and it launches no problem.\nWhat is the root of the problem here? Langchain requirements have been installed"} +{"id": "000250", "text": "I follow a YouTube LangChain tutorial where it teaches Create Your Own ChatGPT with PDF Data in 5 Minutes (LangChain Tutorial) and here is the colab notebook link provided by the author for his work below the video description. 
I didn't modify a lot of his codes where I just changed the OpenAPI key with my one (not free plan).\n\n\nCan I know why I got this error as shown in the diagram above when I try to run the code in the cell?\nI expect the FAISS vector database can be created."} +{"id": "000251", "text": "I am trying to build a docker image for my python flask project.\nSeems like there is some issue with the below packages on which Chromadb build is dependent\n\nduckdb,\nhnswlib\n\nBelow are the contents of the docker file.\nFROM python:3.10-slim-buster\nENV HNSWLIB_NO_NATIVE=1\nRUN mkdir /app\nWORKDIR /app\nCOPY . /app\n\nRUN pip install --upgrade pip setuptools\n\nRUN pip install -r requirements.txt\n\n\n\nRUN export HNSWLIB_NO_NATIVE=1\n\nRUN pip install chromadb\n\nEXPOSE 5000\n\nCMD python ./app.py\n\nThe docker build fails at \"RUN pip install chromadb\" with the below error pointing out \"Could not build wheels for duckdb, hnswlib\"-\nBuilding wheel for hnswlib (pyproject.toml) did not run successfully.\n#0 9.427 \u2502 exit code: 1\n#0 9.427 \u2570\u2500> [55 lines of output]\n#0 9.427 running bdist_wheel\n#0 9.427 running build\n#0 9.427 running build_ext\n#0 9.427 creating tmp\n#0 9.427 gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/usr/local/include/python3.10 -c /tmp/tmpohs_vaib.cpp -o tmp/tmpohs_vaib.o -std=c++14\n#0 9.427 gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/usr/local/include/python3.10 -c /tmp/tmp1os2pqqf.cpp -o tmp/tmp1os2pqqf.o -std=c++11\n#0 9.427 Traceback (most recent call last):\n#0 9.427 File \"/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py\", line 353, in \n#0 9.427 main()\n#0 9.427 File \"/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py\", line 335, in main\n#0 9.427 json_out['return_val'] = hook(**hook_input['kwargs'])\n#0 9.427 File \"/usr/local/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py\", line 251, in build_wheel\n#0 9.427 return _build_backend().build_wheel(wheel_directory, config_settings,\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py\", line 416, in build_wheel\n#0 9.427 return self._build_with_temp_dir(['bdist_wheel'], '.whl',\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py\", line 401, in _build_with_temp_dir\n#0 9.427 self.run_setup()\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/build_meta.py\", line 338, in run_setup\n#0 9.427 exec(code, locals())\n#0 9.427 File \"\", line 116, in \n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/__init__.py\", line 107, in setup\n#0 9.427 return distutils.core.setup(**attrs)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py\", line 185, in setup\n#0 9.427 return run_commands(dist)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/core.py\", line 201, in run_commands\n#0 9.427 dist.run_commands()\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py\", line 969, in run_commands\n#0 9.427 self.run_command(cmd)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/dist.py\", line 1244, in run_command\n#0 
9.427 super().run_command(command)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py\", line 988, in run_command\n#0 9.427 cmd_obj.run()\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/wheel/bdist_wheel.py\", line 343, in run\n#0 9.427 self.run_command(\"build\")\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/cmd.py\", line 318, in run_command\n#0 9.427 self.distribution.run_command(command)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/dist.py\", line 1244, in run_command\n#0 9.427 super().run_command(command)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py\", line 988, in run_command\n#0 9.427 cmd_obj.run()\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/build.py\", line 131, in run\n#0 9.427 self.run_command(cmd_name)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/cmd.py\", line 318, in run_command\n#0 9.427 self.distribution.run_command(command)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/dist.py\", line 1244, in run_command\n#0 9.427 super().run_command(command)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/dist.py\", line 988, in run_command\n#0 9.427 cmd_obj.run()\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/command/build_ext.py\", line 84, in run\n#0 9.427 _build_ext.run(self)\n#0 9.427 File \"/tmp/pip-build-env-o2mbvvt6/overlay/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py\", line 345, in run\n#0 9.427 self.build_extensions()\n#0 9.427 File \"\", line 103, in build_extensions\n#0 9.427 File \"\", line 70, in cpp_flag\n#0 9.427 RuntimeError: Unsupported compiler -- at least C++11 support is needed!\n#0 9.427 [end of output]\n#0 9.427 \n#0 9.427 note: This error originates from a subprocess, and is likely not a problem with pip.\n#0 9.427 ERROR: Failed building wheel for hnswlib\n#0 9.428 Failed to build duckdb hnswlib\n#0 9.428 ERROR: Could not build wheels for duckdb, hnswlib, which is required to install pyproject.toml-based projects\n------\nDockerfile:15\n--------------------\n 13 | RUN export HNSWLIB_NO_NATIVE=1\n 14 | \n 15 | >>> RUN pip install chromadb\n 16 | \n 17 | EXPOSE 5000\n--------------------\nERROR: failed to solve: process \"/bin/sh -c pip install chromadb\" did not complete successfully: exit code: 1\n\nCould someone help please?"} +{"id": "000252", "text": "Im using a conversational agent, with some tools, one of them is a calculator tool (for the sake of example).\nAgent initializated as follows:\nconversational_agent = initialize_agent(\n agent='chat-conversational-react-description',\n tools=[CalculatorTool()],\n llm=llm_gpt4,\n verbose=True,\n max_iterations=2,\n early_stopping_method=\"generate\",\n memory=memory,\n # agent_kwargs=dict(output_parser=output_parser),\n )\n\n\nWhen the CalculatorTool is being activated, it will return a string output, the agent takes that output and process it further to get to the \"Final Answer\" thus changing the formatting of the output from the CalculatorTool\nFor example, for input 10*10, the tool run() function will return 100, which will be propagated back to the 
agent, which will call self._take_next_step() and continue processing the output.\nIt will create a final output similar to \"the result of your prompt of 10x10 is 100\".\nI don't want the added formatting by the LLM, just the output of 100.\nI want to break the chain when the CalculatorTool is done, and have its output returned to the client as is.\nI also have tools that return serialized data for a graph chart; having that data re-processed by the next iterations of the agent will make it invalid."} +{"id": "000253", "text": "I'm using the langchain library to save my company's information in a Vector Database, and when I query for information the results are great, but I need a way to recover where the information is coming from - like source: \"www.site.com/about\" or at least \"document 156\". Do any of you know how to do that?\nEDIT: Currently, I'm using docsearch.similarity_search(query), which only returns the page_content, but metadata comes back empty.\nI'm ingesting with this code, but I'm totally open to change.\ndb = ElasticVectorSearch.from_documents(\n documents,\n embeddings,\n elasticsearch_url=\"http://localhost:9200\",\n index_name=\"elastic-index\",\n )"} +{"id": "000254", "text": "I can see everything but the Embedding of the documents when I use Chroma with Langchain and OpenAI embeddings. It always shows me None for that.\nHere is the code:\nfor db_collection_name in tqdm([\"class1-sub2-chap3\", \"class2-sub3-chap4\"]):\n documents = []\n doc_ids = []\n\n for doc_index in range(3):\n cl, sub, chap = db_collection_name.split(\"-\")\n content = f\"This is {db_collection_name}-doc{doc_index}\"\n doc = Document(page_content=content, metadata={\"chunk_num\": doc_index, \"chapter\":chap, \"class\":cl, \"subject\":sub})\n documents.append(doc)\n doc_ids.append(str(doc_index))\n\n\n # # Initialize a Chroma instance with the original document\n db = Chroma.from_documents(\n collection_name=db_collection_name,\n documents=documents, ids=doc_ids,\n embedding=embeddings, \n persist_directory=\"./data\")\n \n db.persist()\n\nWhen I do db.get(), I see everything as expected except embedding is None.\n{'ids': ['0', '1', '2'],\n 'embeddings': None,\n 'documents': ['This is class1-sub2-chap3-doc0',\n 'This is class1-sub2-chap3-doc1',\n 'This is class1-sub2-chap3-doc2'],\n 'metadatas': [{'chunk_num': 0,\n 'chapter': 'chap3',\n 'class': 'class1',\n 'subject': 'sub2'},\n {'chunk_num': 1, 'chapter': 'chap3', 'class': 'class1', 'subject': 'sub2'},\n {'chunk_num': 2, 'chapter': 'chap3', 'class': 'class1', 'subject': 'sub2'}]}\n\nMy embeddings object is also working fine, as it returns:\nlen(embeddings.embed_documents([\"EMBED THIS\"])[0])\n>> 1536\n\nAlso, in my ./data directory I have the embedding file chroma-embeddings.parquet\n\nI tried the example given in the documentation, but it shows None too\n# Import Document class\nfrom langchain.docstore.document import Document\n\n# Initial document content and id\ninitial_content = \"This is an initial document content\"\ndocument_id = \"doc1\"\n\n# Create an instance of Document with initial content and metadata\noriginal_doc = Document(page_content=initial_content, metadata={\"page\": \"0\"})\n\n# Initialize a Chroma instance with the original document\nnew_db = Chroma.from_documents(\n collection_name=\"test_collection\",\n documents=[original_doc],\n embedding=OpenAIEmbeddings(), # using the same embeddings as before\n ids=[document_id],\n)\n\nHere also new_db.get() gives me None"} +{"id": "000255", "text": "from langchain.schema import BaseMemory\n\nclass 
ChatMemory(BaseMemory):\n def __init__(self, user_id: UUID, type: str):\n self.user_id = user_id\n self.type = type\n\n # implemented abstract methods\n\nclass AnotherMem(ChatMemory):\n def __init__(self, user_id: UUID, type: str):\n super().__init__(user_id, type)\n\nThis seems simple enough - but I get an error: ValueError: \"AnotherMem\" object has no field \"user_id\". What am I doing wrong?\nNote that BaseMemory is an interface."} +{"id": "000256", "text": "It's not possible to pass long documents to ChatGPT directly due to its limited context size. So for example question answering or summarization of long documents is not possible at first sight. I've learned how ChatGPT can in principle \"know\" larger contexts -- basically by summarizing a sequence of previous contexts from the chat history -- but will this suffice to detect really long-range dependencies (bearing \"meaning\") inside really long texts?\nLangChain seems to offer an solution, making use of OpenAI's API and vectorstores. I'm looking for a high-level description what's going on when LangChain makes accessible long documents or even corpora of long documents to ChatGPT and then makes use of ChatGPT's NLP abilities by clever automated prompting, e.g. question answering or summarization. Let's assume that the documents are already formatted as LangChain Document objects."} +{"id": "000257", "text": "I am working with LangChain for the first time. Due to data security, I want to be sure about the storage of langchain's vector store storage. I am using HNSWLib vector store, which mentions it is an in-memory store. What does it mean? Does Langchain/vector stores store any data in its servers?\nhttps://js.langchain.com/docs/modules/indexes/vector_stores/integrations/hnswlib\nhttps://github.com/nmslib/hnswlib"} +{"id": "000258", "text": "I tried executing a langchain agent. I want to save the output from verbose into a variable, but all I can access from the agent.run is only the final answer.\nHow can I save the verbose output to a variable so that I can use later?\nMy code:\nimport json\nfrom langchain.agents import load_tools\nfrom langchain.agents import initialize_agent\nfrom langchain.agents import AgentType\nfrom langchain.llms import OpenAI\nfrom langchain.agents import Tool\nfrom langchain.utilities import PythonREPL\n\nllm = OpenAI(temperature=0.1)\n\n## Define Tools\npython_repl = PythonREPL()\n\ntools = load_tools([\"python_repl\", \"llm-math\"], llm=llm)\n\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n\nresponse = agent.run(\"What is 3^2. Use calculator to solve.\")\n\nI tried accessing the response from the agent, but it's only the final answer instead of the verbose output.\nprinting response gives only 9. 
But I would like the verbose process, like:\n> Entering new AgentExecutor chain...\n I need to use the calculator to solve this.\nAction: Calculator\nAction Input: 3^2\nObservation: Answer: 9\nThought: I now know the final answer.\nFinal Answer: 9"} +{"id": "000259", "text": "What's the recommended way to define an output schema for a nested JSON? The method I use doesn't feel ideal.\n# adding to planner -> from langchain.experimental.plan_and_execute import load_chat_planner\n\nrefinement_response_schemas = [\n ResponseSchema(name=\"plan\", description=\"\"\"{'1': {'step': '','tools': [],'data_sources': [],'sub_steps_needed': bool},\n '2': {'step': '','tools': [],'data_sources': [<>], 'sub_steps_needed': bool},}\"\"\"),] #define json schema in description, works but doesn't feel proper\n \nrefinement_output_parser = StructuredOutputParser.from_response_schemas(refinement_response_schemas)\nrefinement_format_instructions = refinement_output_parser.get_format_instructions()\n\nrefinement_output_parser.parse(output)\n\ngives:\n{'plan': {'1': {'step': 'Identify the top 5 strikers in La Liga',\n 'tools': [],\n 'data_sources': ['sports websites', 'official league statistics'],\n 'sub_steps_needed': False},\n '2': {'step': 'Identify the top 5 strikers in the Premier League',\n 'tools': [],\n 'data_sources': ['sports websites', 'official league statistics'],\n 'sub_steps_needed': False},\n ...\n '6': {'step': 'Given the above steps taken, please respond to the users original question',\n 'tools': [],\n 'data_sources': [],\n 'sub_steps_needed': False}}}\n\nIt works, but I want to know if there's a better way to go about this."} +{"id": "000260", "text": "I have a simple Langchain chatbot using GPT4ALL that's being run in a singleton class within my Django server.\nHere's the simple code:\ngpt4all_path = './models/gpt4all_converted.bin'\nllama_path = './models/ggml_model_q4_0.bin'\n\nembeddings = LlamaCppEmbeddings(model_path=llama_path)\n\nprint(\"Initializing Index...\")\nvectordb = FAISS.from_documents(docs, embeddings)\nprint(\"Initialzied Index!!!\")\n\nThis code runs fine when used inside the manage.py shell separately, but the class instantiation fails to create a FAISS index with the same code. It keeps printing llama_print_timings 43000ms, with the ms increasing on every print message.\nCan someone help me out?"} +{"id": "000261", "text": "I use the following line to add langchain documents to a chroma database: Chroma.from_documents(docs, embeddings, ids=ids, persist_directory='db')\nWhen ids are duplicates, I get this error: chromadb.errors.IDAlreadyExistsError\nHow do I catch the error? (Duplicate ids are expected - I expect Chroma to not add them.)\nI've tried identifying the error in langchain documentation. 
I'm not sure how to catch it."} +{"id": "000262", "text": "I'm trying to create a Qdrant vectorstore and add my documents.\n\nMy embeddings are based on OpenAIEmbeddings\nThe QdrantClient is local in my case\nThe collection that I'm creating has the\nVectorParams as such: VectorParams(size=2000, distance=Distance.EUCLID)\n\nI'm getting the following error:\nValueError: could not broadcast input array from shape (1536,) into shape (2000,)\nI understand that my error is in how I configure the VectorParams, but I don't understand how these values need to be calculated.\nHere's my complete code:\nimport os\nfrom typing import List\n\nfrom langchain.docstore.document import Document\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain.vectorstores import Qdrant, VectorStore\nfrom qdrant_client import QdrantClient\nfrom qdrant_client.models import Distance, VectorParams\n\ndef load_documents(documents: List[Document]) -> VectorStore:\n \"\"\"Create a vectorstore from documents.\"\"\"\n collection_name = \"my_collection\"\n vectorstore_path = \"data/vectorstore/qdrant\"\n embeddings = OpenAIEmbeddings(\n model=\"text-embedding-ada-002\",\n openai_api_key=os.getenv(\"OPENAI_API_KEY\"),\n )\n qdrantClient = QdrantClient(path=vectorstore_path, prefer_grpc=True)\n qdrantClient.create_collection(\n collection_name=collection_name,\n vectors_config=VectorParams(size=2000, distance=Distance.EUCLID),\n )\n vectorstore = Qdrant(\n client=qdrantClient,\n collection_name=collection_name,\n embeddings=embeddings,\n )\n text_splitter = RecursiveCharacterTextSplitter(\n chunk_size=1000,\n chunk_overlap=200,\n )\n\n sub_docs = text_splitter.split_documents(documents)\n vectorstore.add_documents(sub_docs)\n\n return vectorstore\n\nAny ideas on how I should configure the vector params properly?"} +{"id": "000263", "text": "I am using the SQL Database Agent to query a postgres database. I want to use GPT-4 or GPT-3.5 models in the OpenAI llm passed to the agent, but it says I must use ChatOpenAI. Using ChatOpenAI throws parsing errors.\nThe reason for wanting to switch models is reduced cost, better performance and, most importantly, the token limit. The max token size is 4k for 'text-davinci-003' and I need at least double that.\nHere is my code:\nfrom langchain.agents.agent_toolkits import SQLDatabaseToolkit\nfrom langchain.sql_database import SQLDatabase\nfrom langchain.agents import create_sql_agent\nfrom langchain.llms import OpenAI\nfrom langchain.chat_models import ChatOpenAI\nimport os\n\nos.environ[\"OPENAI_API_KEY\"] = \"\"\ndb = SQLDatabase.from_uri(\n \"postgresql://\",\n engine_args={\n \"connect_args\": {\"sslmode\": \"require\"},\n },\n)\n\nllm = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\ntoolkit = SQLDatabaseToolkit(db=db, llm=llm)\n\nagent_executor = create_sql_agent(\n llm=llm,\n toolkit=toolkit,\n verbose=True,\n)\n\nagent_executor.run(\"list the tables in the db. Give the answer in a table json format.\")\n\nWhen I do, it throws an error in the chain midway, saying:\n> Entering new AgentExecutor chain...\nTraceback (most recent call last):\n File \"/home/ramlah/Documents/projects/langchain-test/sql.py\", line 96, in \n agent_executor.run(\"list the tables in the db. 
Give the answer in a table json format.\")\n File \"/home/ramlah/Documents/projects/langchain/langchain/chains/base.py\", line 236, in run\n return self(args[0], callbacks=callbacks)[self.output_keys[0]]\n File \"/home/ramlah/Documents/projects/langchain/langchain/chains/base.py\", line 140, in __call__\n raise e\n File \"/home/ramlah/Documents/projects/langchain/langchain/chains/base.py\", line 134, in __call__\n self._call(inputs, run_manager=run_manager)\n File \"/home/ramlah/Documents/projects/langchain/langchain/agents/agent.py\", line 953, in _call\n next_step_output = self._take_next_step(\n File \"/home/ramlah/Documents/projects/langchain/langchain/agents/agent.py\", line 773, in _take_next_step\n raise e\n File \"/home/ramlah/Documents/projects/langchain/langchain/agents/agent.py\", line 762, in _take_next_step\n output = self.agent.plan(\n File \"/home/ramlah/Documents/projects/langchain/langchain/agents/agent.py\", line 444, in plan\n return self.output_parser.parse(full_output)\n File \"/home/ramlah/Documents/projects/langchain/langchain/agents/mrkl/output_parser.py\", line 51, in parse\n raise OutputParserException(\nlangchain.schema.OutputParserException: Could not parse LLM output: `Action: list_tables_sql_db, ''`\n\nPlease help. Thanks!\nUpdate\nThe recent updates to langchain version 0.0.215 seem to have fixed this issue, for me at least."} +{"id": "000264", "text": "I am trying to ask questions against a multiple pdf using pinecone and openAI but I dont know how to.\nThe code below works for asking questions against one document. but I would like to have multiple documents to ask questions against:\n\n# process_message.py\nfrom flask import request\nimport pinecone\n# from PyPDF2 import PdfReader\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import ElasticVectorSearch, Pinecone, Weaviate, FAISS\nfrom langchain.chains.question_answering import load_qa_chain\nfrom langchain.llms import OpenAI\nimport os\nimport json\n# from constants.company import file_company_id_column, file_location_column, file_name_column\nfrom services.files import FileFireStorage\nfrom middleware.auth import check_authorization\nimport configparser\nfrom langchain.document_loaders import UnstructuredPDFLoader, OnlinePDFLoader, PyPDFLoader\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\n\n\ndef process_message():\n \n # Create a ConfigParser object and read the config.ini file\n config = configparser.ConfigParser()\n config.read('config.ini')\n # Retrieve the value of OPENAI_API_KEY\n openai_key = config.get('openai', 'OPENAI_API_KEY')\n pinecone_env_key = config.get('pinecone', 'PINECONE_ENVIRONMENT')\n pinecone_api_key = config.get('pinecone', 'PINECONE_API_KEY')\n\n\n loader = PyPDFLoader(\"docs/ops.pdf\")\n data = loader.load()\n # data = body['data'][1]['name']\n # Print information about the loaded data\n print(f\"You have {len(data)} document(s) in your data\")\n print(f\"There are {len(data[30].page_content)} characters in your document\")\n\n # Chunk your data up into smaller documents\n text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)\n texts = text_splitter.split_documents(data)\n \n\n embeddings = OpenAIEmbeddings(openai_api_key=openai_key)\n\n pinecone.init(api_key=pinecone_api_key, environment=pinecone_env_key)\n index_name = \"pdf-chatbot\" # Put in the name of your Pinecone index here\n\n docsearch = Pinecone.from_texts([t.page_content 
for t in texts], embeddings, index_name=index_name)\n # Query those docs to get your answer back\n llm = OpenAI(temperature=0, openai_api_key=openai_key)\n chain = load_qa_chain(llm, chain_type=\"stuff\")\n\n query = \"Are there any other documents listed in this document?\"\n docs = docsearch.similarity_search(query)\n answer = chain.run(input_documents=docs, question=query)\n print(answer)\n\n return answer\n\nI added as many comments as I could there.\nI got this information from https://www.youtube.com/watch?v=h0DHDp1FbmQ\nI tried to look at other stackoverflow questions about this but could not find anything similar"} +{"id": "000265", "text": "finetuned a model (https://huggingface.co/decapoda-research/llama-7b-hf) using peft and lora and saved as https://huggingface.co/lucas0/empath-llama-7b. Now im getting Pipeline cannot infer suitable model classes from when trying to use it along with with langchain and chroma vectordb:\nfrom langchain.embeddings import HuggingFaceHubEmbeddings\nfrom langchain import PromptTemplate, HuggingFaceHub, LLMChain\nfrom langchain.chains import RetrievalQA\nfrom langchain.prompts import PromptTemplate\nfrom langchain.vectorstores import Chroma\n\nrepo_id = \"sentence-transformers/all-mpnet-base-v2\"\nembedder = HuggingFaceHubEmbeddings(\n repo_id=repo_id,\n task=\"feature-extraction\",\n huggingfacehub_api_token=\"XXXXX\",\n)\ncomments = [\"foo\", \"bar\"]\nembeddings = embedder.embed_documents(texts=comments)\ndocsearch = Chroma.from_texts(comments, embedder).as_retriever()\n#docsearch = Chroma.from_documents(texts, embeddings)\n\nllm = HuggingFaceHub(repo_id='lucas0/empath-llama-7b', huggingfacehub_api_token='XXXXX')\nqa = RetrievalQA.from_chain_type(llm=llm, chain_type=\"stuff\", retriever=docsearch, return_source_documents=False)\n\nq = input(\"input your query:\")\nresult = qa.run(query=q)\n\nprint(result[\"result\"])\n\n\nis anyone able to tell me how to fix this? Is it an issue with the model card? I was facing issues with the lack of the config.json file and ended up just placing the same config.json as the model I used as base for the lora fine-tuning. Could that be the origin of the issue? If so, how to generate the correct config.json without having to get the original llama weights?\nAlso, is there a way of loading several sentences into a custom HF model (not only OpenAi, as the tutorial show) without using vector dbs?\nThanks!\n\nThe same issue happens when trying to run the API on the model's HF page:"} +{"id": "000266", "text": "I am trying to understand GPT/langchain . 
I want to use my own data only, but I am not able to find a basic example.\nFor example, I envision my chat to be something like this:\nUSER: show me way to build a tree house\nGPT : To build a tree house you need the following materials and tools.....\nMy own data is in a file mydata.txt with the following content:\nTo build a tree house you need the following tool hammer , nails and materials wood...\n....\n.....\n\nCan you please show a simple example of how this can be done?"} +{"id": "000267", "text": "I am using LangChain to create embeddings and then ask a question to those embeddings like so:\nembeddings: OpenAIEmbeddings = OpenAIEmbeddings(disallowed_special=())\ndb = DeepLake(\n dataset_path=deeplake_url,\n read_only=True,\n embedding_function=embeddings,\n)\nretriever: VectorStoreRetriever = db.as_retriever()\nmodel = ChatOpenAI(model_name=\"gpt-3.5-turbo\") \nqa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)\nresult = qa({\"question\": question, \"chat_history\": chat_history})\n\nBut I am getting the following error:\nFile \"/xxxxx/openai/api_requestor.py\", line 763, in _interpret_response_line\n raise self.handle_error_response(\nopenai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 13918 tokens. Please reduce the length of the messages.\n\nThe chat_history is empty and the question is quite small.\nHow can I reduce the size of tokens being passed to OpenAI?\nI'm assuming the response from the embeddings is too large being passed to openai. It might be easy enough to just figure out how to truncate the data being sent to openai."} +{"id": "000268", "text": "I am trying to extract information about a csv using langchain and chatgpt.\nIf I just take a few lines of code and use the 'stuff' method it works perfectly. But when I use the whole csv with map_reduce, it fails on most of the questions.\nMy current code is the following:\nqueries = [\"Tell me the name of every driver who is German\",\"how many german drivers are?\", \"which driver uses the number 14?\", \"which driver has the oldest birthdate?\"]\n\nimport os\n\nfrom dotenv import load_dotenv, find_dotenv\nload_dotenv(find_dotenv()) # read local .env file\n\nfrom langchain.document_loaders import CSVLoader\nfrom langchain.callbacks import get_openai_callback\nfrom langchain.chains import RetrievalQA\nfrom langchain.llms import OpenAI\nfrom langchain.vectorstores import Chroma\n\nfiles = ['drivers.csv','drivers_full.csv']\n\nfor file in files:\n print(\"=====================================\")\n print(file)\n print(\"=====================================\")\n with get_openai_callback() as cb:\n\n loader = CSVLoader(file_path=file,encoding='utf-8')\n docs = loader.load()\n\n from langchain.embeddings.openai import OpenAIEmbeddings\n\n embeddings = OpenAIEmbeddings()\n\n # create the vectorstore to use as the index\n db = Chroma.from_documents(docs, embeddings)\n # expose this index in a retriever interface\n retriever = db.as_retriever(search_type=\"similarity\", search_kwargs={\"k\":1000, \"score_threshold\":\"0.2\"})\n\n for query in queries:\n qa_stuff = RetrievalQA.from_chain_type(\n llm=OpenAI(temperature=0,batch_size=20), \n chain_type=\"map_reduce\", \n retriever=retriever,\n verbose=True\n )\n\n print(query)\n result = qa_stuff.run(query)\n\n print(result)\n \n print(cb)\n\nIt fails in answering how many German drivers there are, the driver with number 14, and the oldest birthdate. 
Also the cost is huge (8$!!!!)\nYou have the code here:\nhttps://github.com/pablocastilla/langchain-embeddings/blob/main/langchain-embedding-full.ipynb"} +{"id": "000269", "text": "I'm using langchain with pinecode, it gives me 4 sourceDocs but I want only most relevant 1 sourceDoc.\nI'm using javascript and don't know where to put top_k in code.\nHere's my code\nexport const makeChain = (vectorStore) => {\n const model = new OpenAI({\n temperature: 0\n modelName: 'gpt-3.5-turbo', \n });\n\n const chain = ConversationalRetrievalQAChain.fromLLM(\n model,\n vectorStore.asRetriever(),\n {\n qaTemplate: QA_PROMPT,\n questionGeneratorTemplate: CONDENSE_PROMPT,\n returnSourceDocuments: true,\n }\n );\n return chain;\n};\n\n/* create vectorStore*/\n const vectorStore = await PineconeStore.fromExistingIndex(\n new OpenAIEmbeddings({}),\n {\n pineconeIndex: index,\n textKey: 'text',\n namespace: PINECONE_NAME_SPACE, //namespace comes from your config folder\n }\n );\n\n //create chain\n const chain = makeChain(vectorStore);\n //Ask a question using chat history\n const response = await chain.call({\n question: sanitizedQuestion,\n chat_history: history || [],\n });\n ```"} +{"id": "000270", "text": "I am trying to put together a simple \"Q&A with sources\" using Langchain and a specific URL as the source data. The URL consists of a single page with quite a lot of information on it.\nThe problem is that RetrievalQAWithSourcesChain is only giving me the entire URL back as the source of the results, which is not very useful in this case.\nIs there a way to get more detailed source info?\nPerhaps the heading of the specific section on the page?\nA clickable URL to the correct section of the page would be even more helpful!\nI am slightly unsure whether the generating of the result source is a function of the language model, URL loader or simply RetrievalQAWithSourcesChain alone.\nI have tried using UnstructuredURLLoader and SeleniumURLLoader with the hope that perhaps more detailed reading and input of the data would help - sadly not.\nRelevant code excerpt:\nllm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')\nchain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=VectorStore.as_retriever())\n\nresult = chain({\"question\": question})\n\nprint(result['answer'])\nprint(\"\\n Sources : \",result['sources'] )"} +{"id": "000271", "text": "I am a brand new user of Chroma database (and the associate python libraries).\nWhen I call get on a collection, embeddings is always none, even if embeddings are explicitly set/defined when adding documents to a collection (so it can't be an issue with generating the embeddings - I don't think).\nFor the following code (Python 3.10, chromadb 0.3.26), I expected to see a list of embeddings in the returned dictionary, but it is none.\nimport chromadb\n\nchroma_client = chromadb.Client()\ncollection = chroma_client.create_collection(name=\"my_collection\")\ncollection.add(\n embeddings=[[1.2, 2.3, 4.5], [6.7, 8.2, 9.2]],\n documents=[\"This is a document\", \"This is another document\"],\n metadatas=[{\"source\": \"my_source\"}, {\"source\": \"my_source\"}],\n ids=[\"id1\", \"id2\"]\n)\n\nprint(collection.get())\n\nOutput:\n{'ids': ['id1', 'id2'], 'embeddings': None, 'documents': ['This is a document', 'This is another document'], 'metadatas': [{'source': 'my_source'}, {'source': 'my_source'}]}\n\nThe same issue does not occur when using query instead of get:\nprint(collection.query(query_embeddings=[[1.2, 2.3, 4.4]], 
include=[\"embeddings\"]))\n\nOutput:\n{'ids': [['id1', 'id2']], 'embeddings': [[[1.2, 2.3, 4.5], [6.7, 8.2, 9.2]]], 'documents': None, 'metadatas': None, 'distances': None}\n\nThe same issue occurs when using langchain wrappers.\nAny ideas, friends? :-)"} +{"id": "000272", "text": "I'm trying to use langchain's pandas agent on python for some development work but it goes into a recursive loop due to it being unable to take action on a thought, the thought being, having to run some pandas code to continue the thought process for the asked prompt on some sales dataset (sales.csv).\nhere is the below code\nimport os\nos.environ['OPENAI_API_KEY'] = 'sk-xxx'\nfrom langchain.agents import create_pandas_dataframe_agent\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.llms import OpenAI\nimport pandas as pd\n\ndf = pd.read_csv('sales.csv')\nllm = ChatOpenAI(temperature=0.0,model_name='gpt-3.5-turbo')\npd_agent = create_pandas_dataframe_agent(llm, df, verbose=True)\npd_agent.run(\"what is the mean of the profit?\")\n\nand well the response it gives is as below (i replaced ``` with ----)\n> Entering new chain...\nThought: We need to calculate the profit first by subtracting the cogs from the total, and then find the mean of the profit.\nAction: Calculate the profit and find the mean using pandas.\nAction Input:\n----\ndf['Profit'] = df['Total'] - df['cogs']\ndf['Profit'].mean()\n----\nObservation: Calculate the profit and find the mean using pandas. is not a valid tool, try another one.\nThought:I need to use python_repl_ast to execute the code.\nAction: Calculate the profit and find the mean using pandas.\nAction Input: `python_repl_ast` \n----\ndf['Profit'] = df['Total'] - df['cogs']\ndf['Profit'].mean()\n----\nObservation: Calculate the profit and find the mean using pandas. is not a valid tool, try another one.\nThought:I need to use `python` instead of `python_repl_ast`.\nAction: Calculate the profit and find the mean using pandas.\nAction Input: `python`\n----\nimport pandas as pd\ndf = pd.read_csv('filename.csv')\ndf['Profit'] = df['Total'] - df['cogs']\ndf['Profit'].mean()\n----\n.\n.\n.\n.\n.\n.\nObservation: Calculate the profit and find the mean using pandas. is not a valid tool, try another one.\nThought:\n\n> Finished chain.\n\n\n'Agent stopped due to iteration limit or time limit.'\nNow my question is why is it not using the python_repl_ast tool to do the calculation?\nI even changed this agent's tool's description (python_repl_ast ) which was\nA Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.\ninto\nA Python shell. Use this to execute python commands and profit, mean calculation using pandas. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.\nBut it did not help. Also i noticed when the python_repl_ast is initialized into my agent the dataframe is loaded into it's local variables tools = [PythonAstREPLTool(locals={\"df\": df})] so I'm guessing I'm doing something wrong.\nAny help will be greatly appreciated.\nThank you."} +{"id": "000273", "text": "I am trying to create a chatbot with langchain and openAI that can query the database with large number of tables based on user query. 
I have used SQLDatabaseSequentialChain which is said to be best if you have large number of tables in the database.\nThe problem is when I run this code, it takes forever to establish the connection and at the end I get this error:\n raise self.handle_error_response(\nopenai.error.APIError: internal error {\n \"message\": \"internal error\",\n \"type\": \"invalid_request_error\",\n \"param\": null,\n \"code\": null\n }\n}\n 500 {'error': {'message': 'internal error', 'type': 'invalid_request_error', 'param': None, 'code': None}} {'Date': 'Wed, 21 Jun 2023 14:49:42 GMT', 'Content-Type': \n'application/json; charset=utf-8', 'Content-Length': '147', 'Connection': 'keep-alive', 'vary': 'Origin', 'x-request-id': '37d9d00a37ce69e68166317740bad7da', 'strict-transport-security': 'max-age=15724800; includeSubDomains', 'CF-Cache-Status': 'DYNAMIC', 'Server': 'cloudflare', 'CF-RAY': '7dad0f24fa9c6ec5-BOM', 'alt-svc': 'h3=\":443\"; ma=86400'}\n\n\nBelow is the code I found on the internet:\nfrom langchain import OpenAI, SQLDatabase\nfrom langchain.chains import SQLDatabaseSequentialChain\nimport pyodbc\n\nserver = 'XYZ'\ndatabase = 'XYZ'\nusername = 'XYZ'\npassword = 'XYZ'\ndriver = 'ODBC Driver 17 for SQL Server'\n\nconn_str = f\"mssql+pyodbc://{username}:{password}@{server}/{database}?driver={driver}\"\n\ntry:\n # Establish a connection to the database\n conn = SQLDatabase.from_uri(conn_str)\n\nexcept pyodbc.Error as e:\n # Handle any errors that occur during the connection or query execution\n print(f\"Error connecting to Azure SQL Database: {str(e)}\")\n\nOPENAI_API_KEY = \"XYZ key\"\n\nllm = OpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, model_name='text-davinci-003 ')\n\nPROMPT = \"\"\" \nGiven an input question, first create a syntactically correct SQL query to run, \nthen look at the results of the query and return the answer. \nThe question: {question}\n\"\"\"\n\ndb_chain = SQLDatabaseSequentialChain.from_llm(llm, conn, verbose=True, top_k=3)\n\nquestion = \"What is the property code of Ambassador, 821?\"\n\ndb_chain.run(PROMPT.format(question=question))\n\n\nI have confirmed that my openAI API key is up and running.\nPlease help me out with this.\nAlso if you have suggestions for any other method that I should consider, please let me know. I am currently doing RnD on this project but didn't found any satisfactory solution.\nThank you\nI tried to check if my openAI API key is available and yes, it is. Expected to get a response from GPT model."} +{"id": "000274", "text": "I'm just getting started with working with LLMs, particularly OpenAIs and other OSS models. There are a lot of guides on using LlamaIndex to create a store of all your documents and then query on them. I tried it out with a few sample documents, but discovered that each query gets super expensive quickly. I think I used a 50-page PDF document, and a summarization query cost me around 1.5USD per query. I see there's a lot of tokens being sent across, so I'm assuming it's sending the entire document for every query. Given that someone might want to use thousands of millions of records, I can't see how something like LlamaIndex can really be that useful in a cost-effective manner.\nOn the other hand, I see OpenAI allows you to train a ChatGPT model. Wouldn't that, or using other custom trained LLMs, be much cheaper and more effective to query over your own data? 
Why would I ever want to set up LlamaIndex?"} +{"id": "000275", "text": "I've searched all over langchain documentation on their official website but I didn't find how to create a langchain doc from a str variable in python so I searched in their GitHub code and I found this :\n doc=Document(\n page_content=\"text\",\n metadata={\"source\": \"local\"}\n )\n\n\nPS: I added the metadata attribute\nthen I tried using that doc with my chain:\nMemory and Chain:\nmemory = ConversationBufferMemory(memory_key=\"chat_history\", input_key=\"human_input\")\nchain = load_qa_chain(\n llm, chain_type=\"stuff\", memory=memory, prompt=prompt\n)\n\n\nthe call method:\n chain({\"input_documents\": doc, \"human_input\": query})\n\nprompt template:\ntemplate = \"\"\"You are a senior financial analyst analyzing the below document and having a conversation with a human.\n{context}\n{chat_history}\nHuman: {human_input}\nsenior financial analyst:\"\"\"\n\nprompt = PromptTemplate(\n input_variables=[\"chat_history\", \"human_input\", \"context\"], template=template\n)\n\nbut I am getting the following error:\nAttributeError: 'tuple' object has no attribute 'page_content'\n\n\nwhen I tried to check the type and the page content of the Document object before using it with the chain I got this\nprint(type(doc))\n\nprint(doc.page_content)\n\"text\""} +{"id": "000276", "text": "I am new to Langchain and followed this Retrival QA - Langchain. I have a custom prompt but when I try to pass Prompt with chain_type_kwargs its throws error in pydantic StufDocumentsChain. and on removing chain_type_kwargs itt just works.\nhow can pass to the prompt?\nerror\nFile /usr/local/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()\n\nValidationError: 1 validation error for StuffDocumentsChain\n__root__\n document_variable_name context was not found in llm_chain input_variables: ['question'] (type=value_error)\n\nCode\nimport json, os\n\nfrom langchain.chains import RetrievalQA\nfrom langchain.llms import OpenAI\nfrom langchain.document_loaders import JSONLoader\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Chroma\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain import PromptTemplate\n\nfrom pathlib import Path\nfrom pprint import pprint\n\n\n\n\nos.environ[\"OPENAI_API_KEY\"] = \"my-key\"\n\n\n\ndef metadata_func(record: dict, metadata: dict) -> dict:\n metadata[\"drug_name\"] = record[\"drug_name\"]\n\n return metadata\n\n\nloader = JSONLoader(\n file_path='./drugs_data_v2.json', \n jq_schema='.drugs[]',\n content_key=\"data\",\n metadata_func=metadata_func)\n\ndocs = loader.load()\n\n\ntext_splitter = CharacterTextSplitter(chunk_size=5000, chunk_overlap=200)\ntexts = text_splitter.split_documents(docs)\n\n\nembeddings = OpenAIEmbeddings()\n\ndocsearch = Chroma.from_documents(texts, embeddings)\n\n\ntemplate = \"\"\"/\nexample custom prommpt\n\nQuestion: {question}\nAnswer: \n\"\"\"\n\nPROMPT = PromptTemplate(template=template, input_variables=['question'])\n\n\nqa = RetrievalQA.from_chain_type(\n llm=ChatOpenAI(\n model_name='gpt-3.5-turbo-16k' \n ),\n chain_type=\"stuff\",\n chain_type_kwargs={\"prompt\": PROMPT},\n retriever=docsearch.as_retriever(),\n)\n\nquery = \"What did the president say about Ketanji Brown Jackson\"\nqa.run(query)"} +{"id": "000277", "text": "I'm creating an app with the help of Langchain and OpenAI.\nI'm loading my data with JSONLoader and want to 
store it in a vectorstore, so I can retrieve on user request to answer questions specific to my data. The Langchain docs are describing HNSWLib as a possible store for ONLY Node.js apps.\nIn my understanding is that NEXT is built up on top of Node.js so it can run SS javascript, so I should be able to use it. I should also mention that the JSONLoader also only works on NodeJS, which works perfectly, so I reckon it should be all set.\nI've created an API route in app/api/llm/route.ts following the docs of the new Route Handlers, and also installed the hnswlib-node package.\nimport { NextRequest } from 'next/server';\nimport { OpenAI } from 'langchain/llms/openai';\nimport { RetrievalQAChain } from 'langchain/chains';\nimport { JSONLoader } from 'langchain/document_loaders/fs/json';\nimport { HNSWLib } from 'langchain/vectorstores/hnswlib';\nimport { OpenAIEmbeddings } from 'langchain/embeddings/openai';\nimport path from 'path';\n\n// eslint-disable-next-line @typescript-eslint/no-unused-vars, no-unused-vars\nexport const GET = async (req: NextRequest) => {\n const apiKey = process.env.NEXT_PUBLIC_OPENAI_API_KEY;\n const model = new OpenAI({ openAIApiKey: apiKey, temperature: 0.9, modelName: 'gpt-3.5-turbo' });\n // Initialize the LLM to use to answer the question.\n const loader = new JSONLoader(path.join(process.cwd(), '/assets/surfspots.json'));\n const docs = await loader.load();\n\n // Create a vector store from the documents.\n const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings({ openAIApiKey: apiKey }));\n\n // Create a chain that uses the OpenAI LLM and HNSWLib vector store.\n const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever());\n const res = await chain.call({\n query: 'List me all of the waves I can find in Fuerteventura',\n });\n console.log({ res });\n};\n\nWhich I'm calling on the front-end inside of a client-side react component.\nWhen I'm trying to run this code, I get the following error:\nError: Please install hnswlib-node as a dependency with, e.g. `npm install -S hnswlib-node`\n at HNSWLib.imports (webpack-internal:///(sc_server)/./node_modules/langchain/dist/vectorstores/hnswlib.js:184:19)\n\nI tried reinstalling the package, removed node_modules and reinstall everything again, search the web for answers, etc.\nAnybody worked with these libraries or have any direction I could consider to debug this?\nThank you in advance!"} +{"id": "000278", "text": "How should I add a field to the metadata of Langchain's Documents?\nFor example, using the CharacterTextSplitter gives a list of Documents:\nconst splitter = new CharacterTextSplitter({\n separator: \" \",\n chunkSize: 7,\n chunkOverlap: 3,\n});\nsplitter.createDocuments([text]);\n\nA document will have the following structure:\n{\n \"pageContent\": \"blablabla\",\n \"metadata\": {\n \"name\": \"my-file.pdf\",\n \"type\": \"application/pdf\",\n \"size\": 12012,\n \"lastModified\": 1688375715518,\n \"loc\": { \"lines\": { \"from\": 1, \"to\": 3 } }\n }\n}\n\nAnd I want to add a field to the metadata"} +{"id": "000279", "text": "I have a basic chain that classifies some text based on the Common European Framework of Reference for Languages. 
I'm timing the difference between normal chain.apply and chain.aapply but can't get it to work.\nWhat am I doing wrong?\nimport os\nfrom time import time\n\nimport openai\nfrom dotenv import load_dotenv, find_dotenv\nfrom langchain.chains import LLMChain\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.prompts import ChatPromptTemplate\n\n_ = load_dotenv(find_dotenv())\nopenai.api_key = os.getenv('OPENAI_API_KEY')\n\nllm = ChatOpenAI(temperature=0)\n\nprompt = ChatPromptTemplate.from_template(\n 'Classify the text based on the Common European Framework of Reference '\n 'for Languages (CEFR). Give a single value: {text}',\n)\nchain = LLMChain(llm=llm, prompt=prompt)\n\ntexts = [\n {'text': 'Hallo, ich bin 25 Jahre alt.'},\n {'text': 'Wie geht es dir?'},\n {'text': 'In meiner Freizeit, spiele ich gerne Fussball.'}\n]\n\nstart = time()\nres_a = chain.apply(texts)\nprint(res_a)\nprint(f\"apply time taken: {time() - start:.2f} seconds\")\nprint()\n\nstart = time()\nres_aa = chain.aapply(texts)\nprint(res_aa)\nprint(f\"aapply time taken: {time() - start:.2f} seconds\")\n\nOutput\n[{'text': 'Based on the given text \"Hallo, ich bin 25 Jahre alt,\" it can be classified as CEFR level A1.'}, {'text': 'A2'}, {'text': 'A2'}]\napply time taken: 2.24 seconds\n\n\naapply time taken: 0.00 seconds\n\nC:\\Users\\User\\AppData\\Local\\Temp\\ipykernel_13620\\1566967258.py:34: RuntimeWarning: coroutine 'LLMChain.aapply' was never awaited\n res_aa = chain.aapply(texts)\nRuntimeWarning: Enable tracemalloc to get the object allocation traceback"} +{"id": "000280", "text": "I don't understand the following behavior of Langchain recursive text splitter. Here is my code and output.\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nr_splitter = RecursiveCharacterTextSplitter(\n chunk_size=10,\n chunk_overlap=0,\n# separators=[\"\\n\"]#, \"\\n\", \" \", \"\"]\n)\ntest = \"\"\"a\\nbcefg\\nhij\\nk\"\"\"\nprint(len(test))\ntmp = r_splitter.split_text(test)\nprint(tmp)\n\nOutput\n13\n['a\\nbcefg', 'hij\\nk']\n\nAs you can see, it outputs chunks of size 7 and 5 and only splits on one of the new line characters. I was expecting output to be ['a','bcefg','hij','k']"} +{"id": "000281", "text": "My default assumption was that the chunk_size parameter would set a ceiling on the size of the chunks/splits that come out of the split_text method, but that's clearly not right:\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter\n\nchunk_size = 6\nchunk_overlap = 2\n\nc_splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)\n\ntext = 'abcdefghijklmnopqrstuvwxyz'\n\nc_splitter.split_text(text)\n\nprints: ['abcdefghijklmnopqrstuvwxyz'], i.e. one single chunk that is much larger than chunk_size=6.\nSo I understand that it didn't split the text into chunks because it never encountered the separator. But so then the question is what is the chunk_size even doing?\nI checked the documentation page for langchain.text_splitter.CharacterTextSplitter here but did not see an answer to this question. And I asked the \"mendable\" chat-with-langchain-docs search functionality, but got the answer \"The chunk_size parameter of the CharacterTextSplitter determines the maximum number of characters in each chunk of text.\"...which is not true, as the code sample above shows."} +{"id": "000282", "text": "I am getting an ImportError while using GPTSimpleVectorIndex from the llama-index library. 
Have installed the latest version of llama-index library and trying to run it on python 3.9.\nfrom llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader, LLMPredictor, PromptHelper, ServiceContext\nImportError: cannot import name 'GPTSimpleVectorIndex' from 'llama_index' (E:\\Experiments\\OpenAI\\data anaysis\\llama-index-main\\venv\\lib\\site-packages\\llama_index\\__init__.py\n\nThe source code is given below,\nimport os, streamlit as st\n\nfrom llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, LLMPredictor, PromptHelper, ServiceContext\nfrom langchain.llms.openai import OpenAI"} +{"id": "000283", "text": "I am querying a text using chatGPT. But I need chatGPT to respond with single direct answers, rather than long stories or irrelevant text. Any way to achieve this?\nMy code looks like:\nfrom langchain.document_loaders import TextLoader\nfrom langchain.vectorstores import DocArrayInMemorySearch\nfrom langchain.indexes import VectorstoreIndexCreator\n\nloader = TextLoader(\"path/to/extracted_text.txt\")\nloaded_text = loader.load()\n# Save document text as vector.\nindex = VectorstoreIndexCreator(\n vectorstore_cls=DocArrayInMemorySearch\n ).from_loaders([loader])\n\n# Query the text\nresponse = index.query(\"At what time did john come home yesterday?\")\nprint(\"Loaded text is:\", loaded_text)\nprint(\"ChatGPT response is:\", response)\n\n\n>>> Loaded text is: \"< a really long text > + John came home last\nnight at 11:30pm + < a really long text >\"\n\n\n>>> ChatGPT response is: \"John came back yesterday at 11:30pm.\"\n\nThe problem is that I want a concise answer 11:30pm rather than a full sentence John came home last night at 11:30pm. Is there a way to achieve this without adding \"I need a short direct response\" to my query? Can I achieve a more guaranteed concise response by setting a parameter through some other means instead?"} +{"id": "000284", "text": "I am writing a little application in JavaScript using the LangChain library. I have the following snippet:\n/* LangChain Imports */\nimport { OpenAI } from \"langchain/llms/openai\";\nimport { BufferMemory } from \"langchain/memory\";\nimport { ConversationChain } from \"langchain/chains\";\n\n// ========================================================================================= //\n // ============= Use LangChain to send request to OpenAi API =============================== //\n // ========================================================================================= //\n\n const openAILLMOptions = {\n modelName: chatModel.value,\n openAIApiKey: decryptedString,\n temperature: parseFloat(temperatureValue.value),\n topP: parseFloat(topP.value),\n maxTokens: parseInt(maxTokens.value),\n stop: stopSequences.value.length > 0 ? stopSequences.value : null,\n streaming: true,\n};\n\n const model = new OpenAI(openAILLMOptions);\n const memory = new BufferMemory();\n const chain = new ConversationChain({ llm: model, memory: memory });\n\n try {\n const response = await chain.call({ input: content.value, signal: signal }, undefined,\n [\n {\n\n handleLLMNewToken(token) {\n process.stdout.write(token);\n },\n },\n ]\n );\n\n// handle the response\n\n}\n\nThis does not work (I tried both using the token via TypeScript and without typing). I have scoured various forums and they are either implementing streaming with Python or their solution is not relevant to this problem. 
So to summarize, I can successfully pull the response from OpenAI via the LangChain ConversationChain() API call, but I can\u2019t stream the response. Is there a solution?"} +{"id": "000285", "text": "When I write code in VS Code, beginning with:\nimport os\nfrom langchain.chains import RetrievalQA\nfrom langchain.llms import OpenAI\nfrom langchain.document_loaders import TextLoader\n\nI am met with the error: ModuleNotFoundError: No module named 'langchain'\nI have updated my Python to version 3.11.4, have updated pip, and reinstalled langchain. I have also checked sys.path and the folder C:\\\\Python311\\\\Lib\\\\site-packages in which the Langchain folder is, is appended.\nEDIT: Langchain import works when I run it in the Python console (functionality works too), but when I run the code from the VSCode run button it still provides the ModuleNotFoundError.\nHas anyone else run into this issue and found a solution?"} +{"id": "000286", "text": "I want to experiment with adding my existing qdrant vector database to langchain for a chatGPT project. However, I cannot seem to find a way to initialise the Qdrant object without providing docs and embeddings, which seems weird to me, as I should be able to simply provide my database url since the docs and embeddings already exist in the database, like when I am interacting via the qdrant python client:\nQdrantClient(host=host, port=port)\n\nIn the official langchain documentation I can only find examples where I have to provide the data when loading the object, like so:\nurl = \"<---qdrant url here --->\"\nqdrant = Qdrant.from_documents(\n docs,\n embeddings,\n url,\n collection_name=\"my_documents\",\n)\n\nTheir documentation also states that:\n\nBoth Qdrant.from_texts and Qdrant.from_documents methods are great to\nstart using Qdrant with Langchain. In the previous versions the\ncollection was recreated every time you called any of them. That\nbehaviour has changed. Currently, the collection is going to be reused\nif it already exists. Setting force_recreate to True allows to remove\nthe old collection and start from scratch.\n\nWhich I find strange as the collection is being reused (as I want) but i still have to provide docs and embeddings.\nI have also checked the qdrants official documentation on the matter, and they provide a half solution where I \"only\" have to provide the embeddings:\nimport qdrant_client\n\nembeddings = HuggingFaceEmbeddings(\n model_name=\"sentence-transformers/all-mpnet-base-v2\"\n)\n\nclient = qdrant_client.QdrantClient(\n \"\",\n api_key=\"\", # For Qdrant Cloud, None for local instance\n)\n\ndoc_store = Qdrant(\n client=client, collection_name=\"texts\", \n embeddings=embeddings,\n)\n\nIf anyone has a solution for this, I would be happy to receive some help."} +{"id": "000287", "text": "I'm working with AzureOpenAI and langchain, constantly getting hit by PermissionError. 
This mostly could be due to the proxy, but can someone please check the code --\nfrom langchain.llms import OpenAI, AzureOpenAI\nfrom langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n\nllm = AzureOpenAI(openai_api_type=\"\", openai_api_base=\"\", deployment_name=\"\", model_name=\"\", openai_api_key=\"\", openai_api_version=\"\")\n\ntemplate = \"\"\"\"\nTranslate the following text from {source_lang} to {dest_lang}: {source_text}\n\"\"\"\n\nprompt_name = PromptTemplate(input_variables=[\"source_lang\", \"dest_lang\", \"source_text\"], template=template)\nchain = LLMChain(llm=llm, prompt=prompt_name)\n\nchain.predict(source_lang=\"English\", dest_lang=\"Spanish\", source_text=\"How are you?\")\n\nchain(inputs={\"source_lang\": \"English\", \"dest_lang\": \"Spanish\", \"source_text\": \"How are you\"})\n\nI also tried the additional openai_proxy parameter without much luck."} +{"id": "000288", "text": "I am trying to get a simple vector store (chromadb) to embed texts using the add_texts method with langchain, however I get the following error despite successfully using the OpenAI package with a different simple langchain scenario:\nValueError: You must provide embeddings or a function to compute them\n\nCode:\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import Chroma\n\ndb = Chroma()\n\ntexts = [\n \"\"\"\n One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.\n \"\"\",\n \"\"\"\n Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. 
Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds.\n\"\"\",\n\n]\n\ndb.add_texts(texts, embedding_function=OpenAIEmbeddings())"} +{"id": "000289", "text": "Hi, I am trying to do speaker diarization with the OpenAI Whisper model.\nfrom langchain.llms import HuggingFacePipeline\nimport torch\nfrom transformers import AutoTokenizer, WhisperProcessor,AutoModelForCausalLM, pipeline, AutoModelForSeq2SeqLM\n\nmodel_id = 'openai/whisper-large-v2'\ntokenizer = AutoTokenizer.from_pretrained(model_id)\nmodel = WhisperProcessor.from_pretrained(model_id)\n\n\npipe = pipeline(\n \"automatic-speech-recognition\",\n model=model, \n tokenizer=tokenizer, \n max_length=100\n)\n\nlocal_llm = HuggingFacePipeline(pipeline=pipe)\n\nThe error I am getting is \" AttributeError: 'WhisperProcessor' object has no attribute 'config'\"\nIs there anything to change in the above code?\nThanks in advance"} +{"id": "000290", "text": "I wrote a program trying to query a local sqlite db, and it worked fine for text-davinci-003:\nllm = OpenAI(model_name=\"text-davinci-003\", verbose=True)\n\nHowever, after I changed it to GPT-4:\nllm = ChatOpenAI(model_name=\"gpt-4-0613\", verbose=True)\n...\ndb_chain = SQLDatabaseChain.from_llm(\n llm,\n db,\n verbose=True,\n use_query_checker=True,\n return_intermediate_steps=True,\n)\n\nwith get_openai_callback() as cb:\n # No intermediate steps\n # result = db_chain.run(query)\n\n # If intermediate steps are needed...\n result = db_chain(query)\n intermediate_steps = result[\"intermediate_steps\"]\n\n print(\"\")\n\n try:\n sql_result = intermediate_steps[3]\n print(\"SQL Query Result:\")\n print(json.dumps(ast.literal_eval(sql_result), indent=4))\n except Exception as e:\n print(f\"Error while parsing the SQL result:\\n{e}\")\n print(\"\")\n print(intermediate_steps)\n \n print(\"\")\n\n print(cb)\n\n... everything still works, except the final SQL query contained more text in addition to the SQL query, i.e.:\n> Entering new SQLDatabaseChain chain...\nHave the user visited some news website? If yes, list all the urls.\nDO NOT specify timestamp unless query said so.\nDO NOT specify limit unless query said so.\nSQLQuery:The original query appears to be correct as it doesn't seem to have any of the common mistakes listed. 
Here is the same query:\n\nSELECT \"URL\" FROM browsinghistory WHERE \"Title\" LIKE '%news%'Traceback (most recent call last):\n File \"C:\\path\\Python311\\Lib\\site-packages\\sqlalchemy\\engine\\base.py\", line 1968, in _exec_single_context\n self.dialect.do_execute(\n File \"C:\\path\\Python311\\Lib\\site-packages\\sqlalchemy\\engine\\default.py\", line 920, in do_execute\n cursor.execute(statement, parameters)\nsqlite3.OperationalError: near \"The\": syntax error\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"D:\\path\\run.py\", line 292, in \n database_mode(llm, filepath, delimiter)\n File \"D:\\path\\run.py\", line 156, in database_mode\n llm.query_database(db_path=db_path, query=query)\n File \"D:\\path\\modules\\chatbot.py\", line 220, in query_database\n result = db_chain(query)\n ^^^^^^^^^^^^^^^\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\n raise e\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\n self._call(inputs, run_manager=run_manager)\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\chains\\sql_database\\base.py\", line 181, in _call\n raise exc\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\chains\\sql_database\\base.py\", line 151, in _call\n result = self.database.run(checked_sql_command)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\sql_database.py\", line 334, in run\n cursor = connection.execute(text(command))\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlalchemy\\engine\\base.py\", line 1413, in execute\n return meth(\n ^^^^^\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlalchemy\\sql\\elements.py\", line 483, in _execute_on_connection\n return connection._execute_clauseelement(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlalchemy\\engine\\base.py\", line 1637, in _execute_clauseelement\n ret = self._execute_context(\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlalchemy\\engine\\base.py\", line 1846, in _execute_context\n return self._exec_single_context(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlalchemy\\engine\\base.py\", line 1987, in _exec_single_context\n self._handle_dbapi_exception(\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlalchemy\\engine\\base.py\", line 2344, in _handle_dbapi_exception\n raise sqlalchemy_exception.with_traceback(exc_info[2]) from e\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlalchemy\\engine\\base.py\", line 1968, in _exec_single_context\n self.dialect.do_execute(\n File \"C:\\path\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\sqlalchemy\\engine\\default.py\", line 920, in do_execute\n cursor.execute(statement, parameters)\nsqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near \"The\": syntax error\n[SQL: The original query appears to be correct as it doesn't seem to 
have any of the common mistakes listed. Here is the same query:\n\nSELECT \"URL\" FROM browsinghistory WHERE \"Title\" LIKE '%news%']\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\n\nI know that I can try to tell it not to return anything but the query (might be unstable. though...), but why isn't this work for GPT-4, while it works for text-davinci-003?\n\nUpdate:\nTried with a different query, and the problem remains:\n> Entering new SQLDatabaseChain chain...\nList all websites visited by the user.\nDO NOT specify timestamp unless query said so.\nDO NOT specify limit unless query said so.\nSQLQuery:The original query seems to be correct. It is simply selecting the \"URL\" column from the \"browsinghistory\" table. There is no misuse of any functions, no data type mismatch, no joins, etc.\n\nReproducing the original query:\n\nSELECT \"URL\" FROM browsinghistory\n...\n...\n..."} +{"id": "000291", "text": "I have the following code:\nchat_history = []\nembeddings = OpenAIEmbeddings()\ndb = FAISS.from_documents(chunks, embeddings)\nqa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0.1), db.as_retriever())\nresult = qa({\"question\": \"What is stack overflow\", \"chat_history\": chat_history})\n\nThe code creates embeddings, creates a FAISS in-memory vector db with some text that I have in chunks array, then it creates a ConversationalRetrievalChain, followed by asking a question.\nBased on what I understand from ConversationalRetrievalChain, when asked a question, it will first query the FAISS vector db, then, if it can't find anything matching, it will go to OpenAI to answer that question. (is my understanding correct?)\nHow can I detect if it actually called OpenAI to get the answer or it was able to get it from the in-memory vector DB? The result object contains question, chat_history and answer properties and nothing else."} +{"id": "000292", "text": "I am trying to follow various tutorials on langchain and streamlit and I have encountered many problems regarding the names of imports. My main problem is that I can't seem to import ConversationalRetrievalChain from langchain.chains. This isn't the first case of this strange issue, for example\nfrom langchain.chains import ConversationBufferMemory\n\nthis line of code doesn't work, and returns the error: cannot import name 'ConversationBufferMemory' from 'langchain.chains' (/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/langchain/chains/init.py)\nHowever, when I write the following code\nfrom langchain.chains.conversation.memory import ConversationBufferMemory\n\nIt works fine. It would appear as if specifying the path to the packet I want to use in the import statement is imperative for it to work.\nWith this in mind I was wondering if anyone had any insight as to what path ConversationalRetrievalChain was in. I tried this, but langchain.chains.conversational_retrieval doesn't exist and many other websites like the [official langchain website] (https://python.langchain.com/docs/modules/memory/conversational_customization) have only lead me more astray.\nDoes anyone know where ConversationalRetrievalChain is located in Langchain version 0.0.27, or how I might go about finding it myself. 
Many thanks :)\nWhat I have tried in my code:\n\nfrom langchain.chains import ConversationalRetrievalChain\nfrom langchain.chains.conversation import ConversationalRetrievalChain\nfrom langchain.chains.conversation.memory import ConversationalRetrievalChain\nlangchain.chains.conversational_retrieval.base import ConversationalRetrievalChain\n\nother things:\n\nInstalling an older version of langchain (keeps saying I need python >= 3.8.1 even though I have python 3.8.9)\n\nWhere I have gone to look:\n\n/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/langchain/chains/\nLangchain documentation"} +{"id": "000293", "text": "I'm currently working with LangChain and using the TextLoader class to load text data from a file and utilize it within a Vectorstore index. However, I've noticed that response times to my queries are increasing as my text file grows larger. To enhance performance, I'm wondering if there are ways to expedite the response times.\nSample Code:\npython\n\nimport os\nimport time\nfrom langchain.document_loaders import TextLoader\nfrom langchain.indexes import VectorstoreIndexCreator\nfrom langchain.chat_models import ChatOpenAI\nimport constants\n\nos.environ[\"OPENAI_API_KEY\"] = constants.OPENAI_API_KEY\n\nloader = TextLoader(\"all_content.txt\", encoding=\"utf-8\")\n\n# Record the start time\nstart_time = time.time()\n\nindex = VectorstoreIndexCreator().from_loaders([loader])\n\nquery = \"My question?\"\nresponse = index.query(query).encode('utf-8').decode('utf-8')\nprint(response)\n\n# Record the end time\nend_time = time.time()\n\n# Calculate the execution time\nexecution_time = end_time - start_time\nprint(f\"Execution time: {execution_time:.4f} seconds\")\n\nMy Questions:\n\nAre there ways to optimize response times when using TextLoader?\n\nCan caching be effectively employed to reduce response times? If so, how can I integrate it into my current implementation?\n\nAre there alternative approaches or techniques I can employ to effectively shorten response times?\n\n\nI've noticed that response times increase as my text file grows, and I'm actively seeking ways to enhance the performance of my queries. Any advice or suggestions for optimizing this implementation would be greatly appreciated. Thank you in advance!\nread langchain docs and tried momento cache"} +{"id": "000294", "text": "Here is the full code. It runs perfectly fine on https://learn.deeplearning.ai/ notebook. But when I run it on my local machine, I get an error about\n\nImportError: Could not import docarray python package\n\nI have tried reinstalling/force installing langchain and lanchain[docarray] (both pip and pip3). I use mini conda virtual environment. 
python version 3.11.4\nfrom langchain.vectorstores import DocArrayInMemorySearch\nfrom langchain.schema import Document\nfrom langchain.indexes import VectorstoreIndexCreator\nimport openai\nimport os\n\nos.environ['OPENAI_API_KEY'] = \"xxxxxx\" #not needed in DLAI\n\ndocs = [\n Document(\n page_content=\"\"\"[{\"API_Name\":\"get_invoice_transactions\",\"API_Description\":\"This API when called will provide the list of transactions\",\"API_Inputs\":[],\"API_Outputs\":[]}]\"\"\"\n ),\n Document(\n page_content=\"\"\"[{\"API_Name\":\"get_invoice_summary_year\",\"API_Description\":\"this api summarizes the invoices by vendor, product and year\",\"API_Inputs\":[{\"API_Input\":\"Year\",\"API_Input_Type\":\"Text\"}],\"API_Outputs\":[{\"API_Output\":\"Purchase Volume\",\"API_Output_Type\":\"Float\"},{\"API_Output\":\"Vendor Name\",\"API_Output_Type\":\"Text\"},{\"API_Output\":\"Year\",\"API_Output_Type\":\"Text\"},{\"API_Output\":\"Item\",\"API_Output_Type\":\"Text\"}]}]\"\"\"\n ),\n Document(\n page_content=\"\"\"[{\"API_Name\":\"loan_payment\",\"API_Description\":\"This API calculates the monthly payment for a loan\",\"API_Inputs\":[{\"API_Input\":\"Loan_Amount\",\"API_Input_Type\":\"Float\"},{\"API_Input\":\"Interest_Rate\",\"API_Input_Type\":\"Float\"},{\"API_Input\":\"Loan_Term\",\"API_Input_Type\":\"Integer\"}],\"API_Outputs\":[{\"API_Output\":\"Monthly_Payment\",\"API_Output_Type\":\"Float\"},{\"API_Output\":\"Total_Interest\",\"API_Output_Type\":\"Float\"}]}]\"\"\"\n ),\n Document(\n page_content=\"\"\"[{\"API_Name\":\"image_processing\",\"API_Description\":\"This API processes an image and applies specified filters\",\"API_Inputs\":[{\"API_Input\":\"Image_URL\",\"API_Input_Type\":\"URL\"},{\"API_Input\":\"Filters\",\"API_Input_Type\":\"List\"}],\"API_Outputs\":[{\"API_Output\":\"Processed_Image_URL\",\"API_Output_Type\":\"URL\"}]}]\"\"\"\n ),\n Document(\n page_content=\"\"\"[{\"API_Name\":\"movies_catalog\",\"API_Description\":\"This API provides a catalog of movies based on user preferences\",\"API_Inputs\":[{\"API_Input\":\"Genre\",\"API_Input_Type\":\"Text\"},{\"API_Input\":\"Release_Year\",\"API_Input_Type\":\"Integer\"}],\"API_Outputs\":[{\"API_Output\":\"Movie_Title\",\"API_Output_Type\":\"Text\"},{\"API_Output\":\"Genre\",\"API_Output_Type\":\"Text\"},{\"API_Output\":\"Release_Year\",\"API_Output_Type\":\"Integer\"},{\"API_Output\":\"Rating\",\"API_Output_Type\":\"Float\"}]}]\"\"\"\n ),\n # Add more documents here \n]\n\nindex = VectorstoreIndexCreator(\n vectorstore_cls=DocArrayInMemorySearch\n ).from_documents(docs)\n\napi_desc = \"do analytics about movies\"\nquery = f\"Search for related APIs based on following API Description: {api_desc}\\\n Return list of API page_contents as JSON objects.\"\n\n\nprint(index.query(query))\n \n\nHere is the error:\n(streamlit) C02Z8202LVDQ:sage_response praneeth.gadam$ /Users/praneeth.gadam/opt/miniconda3/envs/streamlit/bin/python /Users/praneeth.gadam/sage_response/docsearch_copy.py Traceback (most recent call last): File \"/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/base.py\", line 19, in _check_docarray_import\n import docarray ModuleNotFoundError: No module named 'docarray'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last): File \"/Users/praneeth.gadam/sage_response/docsearch_copy.py\", line 30, in \n ).from_documents(docs)\n ^^^^^^^^^^^^^^^^^^^^ File 
\"/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/indexes/vectorstore.py\", line 88, in from_documents\n vectorstore = self.vectorstore_cls.from_documents(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/base.py\", line 420, in from_documents\n return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/in_memory.py\", line 67, in from_texts\n store = cls.from_params(embedding, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File \"/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/in_memory.py\", line 38, in from_params\n _check_docarray_import() File \"/Users/praneeth.gadam/opt/miniconda3/envs/streamlit/lib/python3.11/site-packages/langchain/vectorstores/docarray/base.py\", line 29, in _check_docarray_import\n raise ImportError( ImportError: Could not import docarray python package. Please install it with `pip install \"langchain[docarray]\"`."} +{"id": "000295", "text": "I'm experimenting with LangChain's AgentType.CHAT_ZERO_SHOT_REACT agent. By its name I'd assume this is an agent intended for chat use and I've given it memory but it doesn't seem able to access its memory. What else do I need to do so that this will access its memory? Or have I incorrectly assumed that this agent can handle chats?\nHere is my code and sample output:\nllm = ChatOpenAI(model_name=\"gpt-4\",\n temperature=0)\n\ntools = load_tools([\"llm-math\", \"wolfram-alpha\", \"wikipedia\"], llm=llm)\nmemory = ConversationBufferMemory(memory_key=\"chat_history\")\n\nagent_test = initialize_agent(\n tools=tools, \n llm=llm, \n agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, \n handle_parsing_errors=True,\n memory=memory, \n verbose=True\n)\n\n>>> agent_test.run(\"What is the height of the empire state building?\")\n'The Empire State Building stands a total of 1,454 feet tall, including its antenna.'\n>>> agent_test.run(\"What was the last question I asked?\")\n\"I'm sorry, but I can't provide the information you're looking for.\""} +{"id": "000296", "text": "I want to create a chatbot based on langchain. In the first message of the conversation, I want to pass the initial context.\nWhat is the way to do it? I'm struggling with this, because from what I see, I can use prompt template. From their examples:\ntemplate = \"\"\"The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI Assistant:\"\"\"\nPROMPT = PromptTemplate(input_variables=[\"history\", \"input\"], template=template)\nconversation = ConversationChain(\n prompt=PROMPT,\n llm=llm,\n verbose=True,\n memory=ConversationBufferMemory(ai_prefix=\"AI Assistant\"),\n)\n\nBut the issue is that my usual approach to working with the models is through the use of SystemMessage, which provides context and guidance to the bot. I am unsure if this template is the recommended way for langchain to handle system messages. 
If not, could you please clarify the correct method?"} +{"id": "000297", "text": "using LangChain and OpenAI, how can I have the model return a specific default response? for instance, let's say I have these statement/responses\nStatement: Hi, I need to update my email address.\nAnswer: Thank you for updating us. Please text it here.\n\nStatement: Hi, I have a few questions regarding my case. Can you call me back?\nAnswer: Hi. Yes, one of our case managers will give you a call shortly. \n\nif the input is similar to one of the above statements, I would like to have OpenAI respond with the specific answer."} +{"id": "000298", "text": "I'm trying to run a chain in LangChain with memory and multiple inputs. The closest error I could find was was posted here, but in that one, they are passing only one input.\nHere is the setup:\nfrom langchain.llms import OpenAI\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\nfrom langchain.memory import ConversationBufferMemory\n\nllm = OpenAI(\n model=\"text-davinci-003\",\n openai_api_key=environment_values[\"OPEN_AI_KEY\"], # Used dotenv to store API key\n temperature=0.9,\n client=\"\",\n)\n\nmemory = ConversationBufferMemory(memory_key=\"chat_history\")\n\nprompt = PromptTemplate(\n input_variables=[\n \"text_one\",\n \"text_two\",\n \"chat_history\"\n ],\n template=(\n \"\"\"You are an AI talking to a huamn. Here is the chat\n history so far:\n\n {chat_history}\n\n Here is some more text:\n\n {text_one}\n\n and here is a even more text:\n\n {text_two}\n \"\"\"\n )\n)\n\nchain = LLMChain(\n llm=llm,\n prompt=prompt,\n memory=memory,\n verbose=False\n)\n\nWhen I run\noutput = chain.predict(\n text_one=\"Hello\",\n text_two=\"World\"\n)\n\nI get ValueError: One input key expected got ['text_one', 'text_two']\nI've looked at this stackoverflow post, which suggests to try:\noutput = chain(\n inputs={\n \"text_one\" : \"Hello\",\n \"text_two\" : \"World\"\n }\n)\n\nwhich gives the exact same error. In the spirit of trying different things, I've also tried:\noutput = chain.predict( # Also tried .run() here\n inputs={\n \"text_one\" : \"Hello\",\n \"text_two\" : \"World\"\n }\n)\n\nwhich gives Missing some input keys: {'text_one', 'text_two'}.\nI've also looked at this issue on the langchain GitHub, which suggests to do pass the llm into memory, i.e.\n# Everything the same except...\nmemory = ConversationBufferMemory(llm=llm, memory_key=\"chat_history\") # Note the llm here\n\nand I still get the same error. If someone knows a way around this error, please let me know. Thank-you."} +{"id": "000299", "text": "Is there a way to force a LangChain agent to always use a tool?\n(my specific use case: I need the agent to look for information in my database as opposed to generally accessible information and my tool searches through my database)"} +{"id": "000300", "text": "I am trying to make an LLM model that answers questions from the panda's data frame by using Langchain agent.\nHowever, when the model can't find the answers from the data frame, I want the model to google the question and try to get the answers from the website.\nI tried different methods but I could not incorporate the two functions together.\nI currently have a dataset in csv file, and I converted it into the pandas dataframe.\nAfter that, I have created the agent as shown below.\nagent = create_pandas_dataframe_agent(OpenAI(temperature=1), df, verbose=True)\nI am a beginner who just tried to use LLM model. 
Any help or support would be appreciated!"} +{"id": "000301", "text": "I am using the DirectoryLoader with Langchain on HuggingFace (Gradio SDK) like so from my folder named \"data\":\nfrom langchain.document_loaders import DirectoryLoader \n \nloader = DirectoryLoader('./data/') \nraw_documents = loader.load() \n\nbut get the following error:\nImportError: partition_docx is not available. Install the docx dependencies with pip install \"unstructured[docx]\"\nDoes anyone have any insight as to why this error is being given? Nothing pops up for me on a web search for this error.\nThanks in advance! Apologies if more context is needed, just getting into python and I am very novice."} +{"id": "000302", "text": "I'm currently working on developing a chatbot powered by a Large Language Model (LLM), and I want it to provide responses based on my own documents. I understand that using a fine-tuned model on my documents might not yield direct responses, so I'm exploring the concept of Retrieval-Augmented Generation (RAG) to enhance its performance.\nIn my research, I've come across two tools, Langchain and LlamaIndex, that seem to facilitate RAG. However, I'm struggling to understand the main differences between them. I've noticed that some tutorials and resources use both tools simultaneously, and I'm curious about why one might choose to use one over the other or when it makes sense to use them together.\nCould someone please provide insights into the key distinctions between Langchain and LlamaIndex for RAG, and when it is beneficial to use one tool over the other or combine them in chatbot development?"} +{"id": "000303", "text": "I am trying to use my llama2 model (exposed as an API using ollama). I want to chat with the llama agent and query my Postgres db (i.e. generate text to sql). I was able to find langchain code that uses open AI to do this. However, I am unable to find anything out there which fits my situation.\nAny pointers will be of great help.\nCode with openai\n# Create connection to postgres\nimport psycopg2 # Import the library\n\ndatabase = 'postgres'\nusername = 'postgres'\npassword = 'password'\nserver = 'localhost'\nport = '5432'\n\n# Establish the connection\nconn = psycopg2.connect(\n dbname=database,\n user=username,\n password=password,\n host=server,\n port=port\n)\n\ndb = SQLDatabase.from_uri(\n \"postgresql://postgres:password@localhost:5432/postgres\")\ntoolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))\n\nagent_executor = create_sql_agent(\n llm=OpenAI(temperature=0),\n toolkit=toolkit,\n verbose=True,\n agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n)\n\nagent_executor.run(\"Describe the transaction table\")\n\nI want to make the above code work for my llama2 model exposed via an API at localhost:11434/api/generate"} +{"id": "000304", "text": "I have been trying to use\n\nChromadb version 0.4.8\nLangchain version 0.0.276\n\nwith SentenceTransformerEmbeddingFunction as shown in the snippet below.\nfrom langchain.vectorstores import Chroma\nfrom chromadb.utils import embedding_functions\n\n# other imports\nembedding = embedding_functions.SentenceTransformerEmbeddingFunction(model_name=\"all-MiniLM-L6-v2\")\n\nHowever, it throws the following error.\nRuntimeError: Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0.\n\nFunfact is I do have the required sqlite3 (3.43.0) available which I can validate using the command sqlite3 --version.\nWould appreciate any help. 
Thank you."} +{"id": "000305", "text": "I have the following piece of code:\nif file.filename.lower().endswith('.pdf'):\n pdf = ep.PDFLoad(file_path) # this is the loader from langchain\n doc = pdf.load()\n archivo = crear_archivo(doc, file)\n\nInside crear_archivo function I am splitting the document and sending it to the Weaviate.add_documents:\n cliente = db.NewVect() # This one creates the weaviate.client\n text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n docs = text_splitter.split_documents(document)\n\n embeddings = OpenAIEmbeddings()\n return Weaviate.add_documents(docs, embeddings, client=client, weaviate_url=EnvVect.Host, by_text=False, index_name=\"LangChain\") \n# using this instead of from_documents since I don't want to initialize a new vectorstore \n\n# Some more logic to save the doc to another database\n\nWhenever I try to run the code it breaks during the Weaviate.add_documents() function prompting the following error:\n'tuple' object has no attribute 'page_content'.\nI tried to check the type of docs, but that doesn't seem wrong since it returns a List[Document] which is the same type the function accepts.\nHow can I make it work? I kind of followed this approach but the difference is I am loading files such as PDF, txt etc."} +{"id": "000306", "text": "Recently, while developing a Streamlit app, the app frequently crashes and requires manual rebooting.\nAfter spending some time, I identified the issue as \"exceeding RAM\". The free version of RAM is only 1GB, and my app easily surpasses this limit when multiple users are using it simultaneously.\nApplication of the App\n\nUsing langchain to build a document GPT.\nUsers upload PDFs and start asking questions.\n\nMain Problematic Code\nComplete code from Github\napp.py\nmodel = None\n\ndoc_container = st.container()\nwith doc_container:\n # when user upload pdf base on upload_and_process_pdf()\n # the create_doc_gpt() can execute successfully\n docs = upload_and_process_pdf()\n model = create_doc_gpt(docs)\n del docs\n st.write('---')\n\ndef create_doc_gpt(docs):\n if not docs:\n return\n\n ... instance docGPT which will use HuggingFaceEmbedding\n\nWhat I've Tried\nI attempted to identify where the issue in the code lies and whether optimization is possible. I conducted the following experiments:\n\nUsed Windows Task Manager's detailed view.\n\nExecuted the app (streamlit run app.py) and simultaneously identified its PID, observing memory usage.\n\nWhen opening the app, memory usage occupied 150,000 KB.\n\nBased on the simplified code above, after uploading a PDF, the docGPT instance (my model) is instantiated. At this point, memory rapidly spikes to 1,000,000 KB. I suspect this is due to HuggingFaceEmbedding causing this. (When I switched to a lighter embedding, memory decreased significantly)\n\nSince memory's main source is the model instance, but when I re-upload the same PDF, memory increases again to 1,750,000 KB. This seems like two models are occupying memory.\n\nAdditionally, I have attempted to repeatedly upload the same PDF on my app. After uploading the 8000KB file approximately 4 times, the app crashes.\n\n\nQuestion\nHow should I correctly release the initially instantiated model?\nIf I use st.cache_resource to decorate create_doc_gpt(docs), I have a few points of confusion as follows:\n\nWhen the same user uploads the first PDF, the embedding is performed, and the model is returned. At this point, does the app create a cache and occupy memory? 
If the user uploads a new PDF again, will the app go through embedding and returning the model, creating a new cache and occupying memory again?\n\nIf the assumption in #1 is correct, can I use the ttl and max_entries parameters to avoid excessive caching?\n\nIf the assumptions in #1 and #2 are correct, when there are two users simultaneously, and my max_entries is set to 2, will the cached models they create be counted separately?\n\n\n\nI'm unsure if this type of question is appropriate to ask here. If it's against the rules, I'm willing to delete the post and seek help elsewhere."} +{"id": "000307", "text": "I currently trying to implement langchain functionality to talk with pdf documents.\nI have a bunch of pdf files stored in Azure Blob Storage. I am trying to use langchain PyPDFLoader to load the pdf files to the Azure ML notebook. However, I am not being able to get it done. If I have the pdf stored locally, it is no problem, but to scale up I have to connect to the blob store. I have not really found any documents on langchain website or azure website. Wondering, if any of you is having similar problem.\nThank you\nBelow is an example of code i am trying:\nfrom azureml.fsspec import AzureMachineLearningFileSystem\nfs = AzureMachineLearningFileSystem(\"\")\n\nfrom langchain.document_loaders import PyPDFLoader\nwith fs.open('*/.../file.pdf', 'rb') as fd:\n loader = PyPDFLoader(document)\n data = loader.load()\n\nError: TypeError: expected str, bytes or os.PathLike object, not StreamInfoFileObject\n\nAnother example tried:\nfrom langchain.document_loaders import UnstructuredFileLoader\nwith fs.open('*/.../file.pdf', 'rb') as fd:\n loader = UnstructuredFileLoader(fd)\ndocuments = loader.load() \n\nError: TypeError: expected str, bytes or os.PathLike object, not StreamInfoFileObject"} +{"id": "000308", "text": "import streamlit as st\nimport os\nimport tempfile\nfrom pathlib import Path\nfrom pydantic import BaseModel, Field\nimport streamlit as st\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.agents import Tool\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.text_splitter import CharacterTextSplitter\nfrom langchain.vectorstores import FAISS\nfrom langchain.document_loaders import PyPDFLoader\nfrom langchain.chains import RetrievalQA\nfrom langchain.agents import initialize_agent\nimport openai\nos.environ[\"OPENAI_API_KEY\"] = \"\"\nos.environ['OPENAI_API_TYPE'] = 'azure'\nos.environ['OPENAI_API_VERSION'] = '2023-03-15-preview'\nos.environ['OPENAI_API_BASE'] = \"https://summarization\"\n\n#API settings for embedding\nopenai.api_type = \"azure\"\nopenai.api_base = \"https://summarization\"\nopenai.api_version = '2023-03-15-'\nopenai.api_key = \"\"\n \n \nclass DocumentInput(BaseModel):\n question: str = Field()\n\n# Create a temporary directory in the script's folder\nscript_dir = Path(__file__).resolve().parent\ntemp_dir = os.path.join(script_dir, \"tempDir\")\n\n\ndef main():\n st.title(\"PDF Document Comparison\")\n\n # Create a form to upload PDF files and enter a question\n st.write(\"Upload the first PDF file:\")\n pdf1 = st.file_uploader(\"Choose a PDF file\", type=[\"pdf\"], key=\"pdf1\")\n\n st.write(\"Upload the second PDF file:\")\n pdf2 = st.file_uploader(\"Choose a PDF file\", type=[\"pdf\"], key=\"pdf2\")\n\n question = st.text_input(\"Enter your question\")\n submit_button = st.button(\"Compare PDFs\")\n\n if submit_button:\n if pdf1 and pdf2:\n if not os.path.exists(temp_dir):\n os.makedirs(temp_dir)\n else:\n # Clear the 
previous contents of the \"tempDir\" folder\n for file in os.listdir(temp_dir):\n file_path = os.path.join(temp_dir, file)\n try:\n if os.path.isfile(file_path):\n os.unlink(file_path)\n except Exception as e:\n print(f\"Error deleting file: {e}\")\n\n # Save the PDF files to the \"tempDir\" directory\n pdf1_path = os.path.join(temp_dir, pdf1.name)\n with open(pdf1_path, 'wb') as f:\n f.write(pdf1.getbuffer())\n\n pdf2_path = os.path.join(temp_dir, pdf2.name)\n with open(pdf2_path, 'wb') as f:\n f.write(pdf2.getbuffer())\n\n\n\n llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\",engine=\"gpt-35-turbo\")\n\n tools = []\n files = [\n\n {\n \"name\": pdf1.name,\n \"path\": pdf1_path,\n },\n\n {\n \"name\": pdf2.name,\n \"path\": pdf2_path,\n },\n ]\n\n for file in files:\n loader = PyPDFLoader(file[\"path\"])\n pages = loader.load_and_split()\n text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n docs = text_splitter.split_documents(pages)\n embeddings = OpenAIEmbeddings()\n retriever = FAISS.from_documents(docs, embeddings).as_retriever()\n\n # Wrap retrievers in a Tool\n tools.append(\n Tool(\n args_schema=DocumentInput,\n name=file[\"name\"],\n description=f\"useful when you want to answer questions about {file['name']}\",\n func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever),\n )\n )\n agent = initialize_agent(\n tools=tools,\n llm=llm,\n verbose=True,\n )\n\n st.write(agent({\"input\": question}))\n # Now you have both PDFs saved in the \"tempDir\" folder\n # You can perform your PDF comparison here\n\n\nif __name__ == \"__main__\":\n main()\n\nI get the following error :\npydantic.v1.error_wrappers.ValidationError: 1 validation error for Tool\nargs_schema\nsubclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel) I am following the example from langchain documentation:https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit"} +{"id": "000309", "text": "from langchain.document_loaders import TextLoader\n# Create the TextLoader object using the file path\nLoader = tl('data.txt')\n\nI want to use a langchain with a string instead of a txt file, is this possible?\ndef get_response(query):\n #print(query)\n result = index.query(query)\n result = str(result)"} +{"id": "000310", "text": "I am trying to initialise a langchain with a pre-trained custom model of my own rather than use one of Google's base models. When I run the code with text-bison, it works as expected but when I run it with my own model, I get the following exception:\n_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:\n status = StatusCode.INVALID_ARGUMENT\n details = \"Request contains an invalid argument.\"\n debug_error_string = \"UNKNOWN:Error received from peer ipv4:x.x.x.x:443 {grpc_message:\"Request contains an invalid argument.\", grpc_status:3, created_time:\"2023-09-07T06:14:11.519783678+00:00\"}\"\n\nMy custom model is supervised trained on text-bison. Here is my code:\n#llm = VertexAI(model_name=\"text-bison@001\", max_output_tokens=1024)\nllm = VertexAI(model_name=\"projects/xxx/locations/us-central1/models/xxx\", max_output_tokens=1024)\nconversation_buf = ConversationChain(\n llm=llm,\n memory=ConversationBufferMemory()\n)"} +{"id": "000311", "text": "I am following this tutorial from langchain official documentation here were I try to track the number of tokens while usage. 
However, I wanted to use gpt-3.5-turbo instead of text-davinci-003 so I changed the LLM class used from OpenAI to ChatOpenAI but this a Value Error of unsupported message type\nHere is the code snippet:\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.callbacks import get_openai_callback\n\nos.environ['OPENAI_API_KEY'] = \"OPENAI-API-KEY\"\n\nllm = ChatOpenAI(\n model_name='gpt-3.5-turbo-16k',\n temperature=0.0\n)\n\nwith get_openai_callback() as cb:\n result = llm(\"Tell me a joke\")\n print(cb)\n\nGetting this error:\nValueError: Got unsupported message type: T\nWhy changing the class from OpenAI to ChatOpenAI gives this error? How to solve?"} +{"id": "000312", "text": "I am trying to deploy a Llama 2 model for text generation inference using Sagemaker and LangChain. I am writing code in Notebook instances and deploying SageMaker instances from the code.\nI followed the documentation from https://python.langchain.com/docs/integrations/llms/sagemaker. I used the following code to create a chain for question answering:\nfrom langchain.docstore.document import Document\nexample_doc_1 = \"\"\"\nPeter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.\nSince she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.\nTherefore, Peter stayed with her at the hospital for 3 days without leaving.\n\"\"\"\n\ndocs = [\n Document(\n page_content=example_doc_1,\n )\n]\n\nfrom typing import Dict\n\nfrom langchain import PromptTemplate, SagemakerEndpoint\nfrom langchain.llms.sagemaker_endpoint import LLMContentHandler\nfrom langchain.chains.question_answering import load_qa_chain\nimport json\n\nquery = \"\"\"How long was Elizabeth hospitalized?\n\"\"\"\n\nprompt_template = \"\"\"Use the following pieces of context to answer the question at the end.\n\n{context}\n\nQuestion: {question}\nAnswer:\"\"\"\nPROMPT = PromptTemplate(\n template=prompt_template, input_variables=[\"context\", \"question\"]\n)\n\n\nclass ContentHandler(LLMContentHandler):\n content_type = \"application/json\"\n accepts = \"application/json\"\n\n def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n input_str = json.dumps({prompt: prompt, **model_kwargs})\n return input_str.encode(\"utf-8\")\n\n def transform_output(self, output: bytes) -> str:\n response_json = json.loads(output.read().decode(\"utf-8\"))\n return response_json[0][\"generated_text\"]\n\n\ncontent_handler = ContentHandler()\n\nchain = load_qa_chain(\n llm=SagemakerEndpoint(\n endpoint_name=\"XYZ\",\n credentials_profile_name=\"XYZ\",\n region_name=\"XYZ\",\n model_kwargs={\"temperature\": 1e-10},\n content_handler=content_handler,\n ),\n prompt=PROMPT,\n)\n\nchain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)\n\nBut I got an error\nValueError: Error raised by inference endpoint: \nAn error occurred (ModelError) when calling the InvokeEndpoint operation: \nReceived client error (422) from primary with message \n\"Failed to deserialize the JSON body into the target type: missing field `inputs` at line 1 column 966\".\n\nIn multiple tutorials there isn't any inputs field. I have no idea if they updated the documentation, which I have been referring to but can't resolve this problem.\nMy question is:\n\nWhy am I getting this error and how can I fix it?\nWhat am I missing in my code or configuration?\nAny help or guidance would be appreciated. 
Thanks in advance."} +{"id": "000313", "text": "Currently I have managed to make a web interface to chat with a single PDF document using langchain as a framework, OpenAI as an LLM and Pinecone as a vector store. However, when I wanted to introduce new documents (5 new documents) PDF to the vecotres store, I realized that the information is different from the first document.\nI have thought about introducing the resulting embeddings of all the pdf documents to Pinecone. But I have a doubt about whether the information can be crossed when specific information is requested from only one PDF document.\nSo I'm thinking that another way could be to add some selectors in the same web interface so that the user can choose from the PDF they want to obtain answers from. and thus the information is directed to the specific PDF. But perhaps the user's interaction with the web interface would not be so automatic.\nThis is why I want to find a way to send all pdf documents to pinecone, and perhaps in the vector store itself add an index for each document or add more collections. I appreciate if anyone has worked on something similar and can give me advice to continue with my task."} +{"id": "000314", "text": "With the\n docs = (db.similarity_search(query='some query here'))\nmethod to output single or multiple documents of the deeplake vectorstore. Is there a method to output all documents?\nBecause my documents are structured like this:\npage_content='256 128 256zM208 160c-8,836 0-16-...\n384C234.5 384 256 362.5 256 336C256 309.5 234.5 288 208' \nmetadata={'source':'chatbot/app/solid.min.js','file_name':'solid.min.js'}\n\nAnd I would genre all documents whose metadata.file_name corresponds to a particular file. Unfortunately I can't find any recordings for this and that's why I'm asking here for experience."} +{"id": "000315", "text": "I have this requirement, where i want to create a knowledge retriver which will call the API to get the closest matching information, I know that we have these integrations in langchain with multiple vector stores, but we have requirement were we have to call the API to find the closest matching document how can we create our custom retriver in langchain which will call this API to get the nearest matching informtaion\nI'm trying to build the custom retriver in langchain but still not able figure it out"} +{"id": "000316", "text": "As shown in LangChain Quickstart, I am trying the following Python code:\nfrom langchain.prompts.chat import ChatPromptTemplate\ntemplate = \"You are a helpful assistant that translates {input_language} to {output_language}.\"\nhuman_template = \"{text}\"\n\nchat_prompt = ChatPromptTemplate.from_messages([\n (\"system\", template),\n (\"human\", human_template),\n])\n\nchat_prompt.format_messages(input_language=\"English\", output_language=\"French\", text=\"I love programming.\")\n\nBut when I run the above code, I get the following error:\nTraceback (most recent call last):\n File \"/home/yser364/Projets/SinappsIrdOpenaiQA/promptWorkout.py\", line 6, in \n chat_prompt = ChatPromptTemplate.from_messages([\n File \"/home/yser364/.local/lib/python3.10/site-packages/langchain/prompts/chat.py\", line 220, in from_messages\n return cls(input_variables=list(input_vars), messages=messages)\n File \"/home/yser364/.local/lib/python3.10/site-packages/langchain/load/serializable.py\", line 64, in __init__\n super().__init__(**kwargs)\n File \"pydantic/main.py\", line 341, in pydantic.main.BaseModel.__init__\n pydantic.error_wrappers.ValidationError: 4 
validation errors for ChatPromptTemplate\n messages -> 0\n value is not a valid dict (type=type_error.dict)\n messages -> 0\n value is not a valid dict (type=type_error.dict)\n messages -> 1\n value is not a valid dict (type=type_error.dict)\n messages -> 1\n value is not a valid dict (type=type_error.dict)\n\nI use Python 3.10.12."} +{"id": "000317", "text": "I am making a chatbot which accesses an external knowledge base docs. I want to get the relevant documents the bot accessed for its answer, but this shouldn't be the case when the user input is something like \"hello\", \"how are you\", \"what's 2+2\", or any answer that is not retrieved from the external knowledge base docs. In this case, I want\nretriever.get_relevant_documents(query) or any other line to return an empty list or something similar.\nimport os\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.vectorstores import FAISS\nfrom langchain.chains import ConversationalRetrievalChain \nfrom langchain.memory import ConversationBufferMemory\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.prompts import PromptTemplate\n\nos.environ['OPENAI_API_KEY'] = ''\n\ncustom_template = \"\"\"\nThis is conversation with a human. Answer the questions you get based on the knowledge you have.\nIf you don't know the answer, just say that you don't, don't try to make up an answer.\nChat History:\n{chat_history}\nFollow Up Input: {question}\n\"\"\"\nCUSTOM_QUESTION_PROMPT = PromptTemplate.from_template(custom_template)\n\nllm = ChatOpenAI(\n model_name=\"gpt-3.5-turbo\", # Name of the language model\n temperature=0 # Parameter that controls the randomness of the generated responses\n)\n\nembeddings = OpenAIEmbeddings()\n\ndocs = [\n \"Buildings are made out of brick\",\n \"Buildings are made out of wood\",\n \"Buildings are made out of stone\",\n \"Buildings are made out of atoms\",\n \"Buildings are made out of building materials\",\n \"Cars are made out of metal\",\n \"Cars are made out of plastic\",\n ]\n\nvectorstore = FAISS.from_texts(docs, embeddings)\n\nretriever = vectorstore.as_retriever()\n\nmemory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)\n\nqa = ConversationalRetrievalChain.from_llm(\n llm,\n retriever,\n condense_question_prompt=CUSTOM_QUESTION_PROMPT,\n memory=memory\n)\n\nquery = \"what are cars made of?\"\nresult = qa({\"question\": query})\nprint(result)\nprint(retriever.get_relevant_documents(query))\n\nI tried setting a threshold for the retriever but I still get relevant documents with high similarity scores. 
And in other user prompts where there is a relevant document, I do not get back any relevant documents.\nretriever = vectorstore.as_retriever(search_type=\"similarity_score_threshold\", search_kwargs={\"score_threshold\": .9})"} +{"id": "000318", "text": "I have the following LangChain code that checks the chroma vectorstore and extracts the answers from the stored docs - how do I incorporate a Prompt template to create some context , such as the following:\nsales_template = \"\"\"You are customer services and you need to help people.\n{context}\nQuestion: {question}\"\"\"\nSALES_PROMPT = PromptTemplate(\n template=sales_template, input_variables=[\"context\", \"question\"]\n)\n\nHow do I incorporate the above into the below?\n#Embedding Text Using Langchain\nfrom langchain.embeddings import SentenceTransformerEmbeddings\nembeddings = SentenceTransformerEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n\n# Creating Vector Store with Chroma DB\nfrom langchain.vectorstores import Chroma\n#db = Chroma.from_documents(docs, embeddings)\ndb = Chroma(persist_directory=\"./chroma_db\", embedding_function=embeddings)\n# docs = db3.similarity_search(query)\n# print(docs[0].page_content)\n\n#Using OpenAI Large Language Models (LLM) with Chroma DB\nimport os\nos.environ[\"OPENAI_API_KEY\"] = 'sk-12345678910'\n\nfrom langchain.chat_models import ChatOpenAI\nmodel_name = \"gpt-3.5-turbo\"\nllm = ChatOpenAI(model_name=model_name)\n\n#Extracting Answers from Documents\n\nfrom langchain.chains.question_answering import load_qa_chain\nchain = load_qa_chain(llm, chain_type=\"stuff\",verbose=True)\n\nquery = \"What does Neil do for work?\"\nmatching_docs = db.similarity_search(query)\nanswer = chain.run(input_documents=matching_docs, question=query)\nprint(answer)"} +{"id": "000319", "text": "I am trying to use a custom embedding model in Langchain with chromaDB. I can't seem to find a way to use the base embedding class without having to use some other provider (like OpenAIEmbeddings or HuggingFaceEmbeddings). Am I missing something?\nOn the Langchain page it says that the base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. so I figured there must be a way to create another class on top of this class and overwrite/implement those methods with our own methods. But how do I do that?\nI tried to somehow use the base embeddings class but am unable to create a new embedding object/class on top of it."} +{"id": "000320", "text": "I Am trying to download a PDF file from a GCS storage bucket and read the content into memory.\nWhen using Langchain with python, i can just use the GCSDirectoryLoader to read all the files in a bucket and the pdf text.\nLangchain for NodeJs doesnt have GCSDirectoryLoader or a webloader for PDF files.\nWhen downloading the file, i get a Document with the binary representation as content.\nWhat is the best way to download pdf content from GCS bucket into memory?"} +{"id": "000321", "text": "I am retrieving the results from my internal Db but for this example I have added an open URL. I am using Azure openai and langchain in conjunction to build this retrieval engine. I checked in the Azure Portal that deployment is successful and i am able to run in a stand alone prompt.\nThe last query throws this error:\n\nInvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a .\n\nSince we can see I have already supplied a deployment_id above. 
What am I missing?\nHere is the entire code\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain.llms import OpenAI\nfrom langchain.chains import RetrievalQA\nfrom langchain.document_loaders import TextLoader\nfrom langchain.document_loaders import DirectoryLoader \nloader= TextLoader(r'./RawText.txt', encoding='utf-8')\ndocuments = loader.load() \ntext_splitter= RecursiveCharacterTextSplitter(chunk_size= 70, chunk_overlap=0)\ntexts= text_splitter.split_documents(documents)\n\n#create the DB\n persist_directory = 'db'\n#embedding= OpenAIEmbeddings()\n embeddings = OpenAIEmbeddings(deployment=\"text-embedding-ada-002\",model=\"text-embedding-ada-002\", chunk_size = 1)\n vectordb= Chroma.from_documents(documents=texts,\n embedding=embeddings,\n persist_directory=persist_directory)\n\nPost this I am creating a retriever as below:\n vectordb= Chroma(persist_directory=persist_directory, embedding_function=embeddings)\n retriever= vectordb.as_retriever()\n docs=retriever.get_relevant_documents(\"Databricks\")\n\n#Creating a Chain:\n from langchain.chains import RetrievalQA\n import openai\n\n#Specify the name of the engine you want to use\nengine = \"test_chat\"\nqa_chain=RetrievalQA.from_chain_type(llm=OpenAI(),\n chain_type=\"stuff\",\n retriever= retriever,\n return_source_documents=True)\n\n#test_chat here for reference is text-embedding-ada-002\n#Cite Source\ndef process_llm_responses(llm_response):\nprint(llm_response['result'])\nprint('\\n\\nSources:')\nfor source in llm_response[\"source_documents\"]:\n print(source.metadata[\"source\"])\n\n#full retrieval in process\nquery = \"What is A medallion architecture\"\nllm_response= qa_chain(query)\nprocess_llm_responses"} +{"id": "000322", "text": "I'm trying to create an embedding vector database with some .txt documents in my local folder. 
In particular I'm following this tutorial from the official page of LangChain: LangChain - Azure Cognitive Search and Azure OpenAI.\nI have followed all the steps of the tutorial and this is my Python script:\n# From https://python.langchain.com/docs/integrations/vectorstores/azuresearch\n\n\nimport openai\nimport os\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores.azuresearch import AzureSearch\n\n\nos.environ[\"OPENAI_API_TYPE\"] = \"azure\"\nos.environ[\"OPENAI_API_BASE\"] = \"https://xxxxxx.openai.azure.com\"\nos.environ[\"OPENAI_API_KEY\"] = \"xxxxxxxxx\"\nos.environ[\"OPENAI_API_VERSION\"] = \"2023-05-15\"\n\nmodel: str = \"text-embedding-ada-002\"\n\n\nvector_store_address: str = \"https://xxxxxxx.search.windows.net\"\nvector_store_password: str = \"xxxxxxx\"\n\n\n\nembeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)\nindex_name: str = \"cognitive-search-openai-exercise-index\"\nvector_store: AzureSearch = AzureSearch(\n azure_search_endpoint=vector_store_address,\n azure_search_key=vector_store_password,\n index_name=index_name,\n embedding_function=embeddings.embed_query,\n)\n\n\nfrom langchain.document_loaders import TextLoader\nfrom langchain.text_splitter import CharacterTextSplitter\n\nloader = TextLoader(\"C:/Users/xxxxxxxx/azure_openai_cognitive_search_exercise/data/qna/a.txt\", encoding=\"utf-8\")\n\ndocuments = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\ndocs = text_splitter.split_documents(documents)\n\nvector_store.add_documents(documents=docs)\n\n\n\n\n# Perform a similarity search\ndocs = vector_store.similarity_search(\n query=\"Who is Pippo Franco?\",\n k=3,\n search_type=\"similarity\",\n)\nprint(docs[0].page_content)\n\nNow, when I run the script I get the following error:\n\n\nvector_search_configuration is not a known attribute of class and will be ignored\nalgorithm_configurations is not a known attribute of class and will be ignored\nTraceback (most recent call last):\n File \"C:\\Users\\xxxxxxxxx\\venv\\Lib\\site-packages\\langchain\\vectorstores\\azuresearch.py\", line 105, in _get_search_client\n index_client.get_index(name=index_name)\n File \"C:\\Users\\xxxxxxx\\venv\\Lib\\site-packages\\azure\\core\\tracing\\decorator.py\", line 78, in wrapper_use_tracer\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\xxxxxxx\\KYF\\venv\\Lib\\site-packages\\azure\\search\\documents\\indexes\\_search_index_client.py\", line 145, in get_index\n result = self._client.indexes.get(name, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\xxxxxx\\venv\\Lib\\site-packages\\azure\\core\\tracing\\decorator.py\", line 78, in wrapper_use_tracer\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\xxxxxx\\KYF\\venv\\Lib\\site-packages\\azure\\search\\documents\\indexes\\_generated\\operations\\_indexes_operations.py\", \nline 864, in get\n map_error(status_code=response.status_code, response=response, error_map=error_map)\n File \"C:\\Users\\xxxxxxxx\\venv\\Lib\\site-packages\\azure\\core\\exceptions.py\", line 165, in map_error\n raise error\nazure.core.exceptions.ResourceNotFoundError: () No index with the name 'cognitive-search-openai-exercise-index' was found in the service 'cognitive-search-openai-exercise'.\nCode:\nMessage: No index with the name 'cognitive-search-openai-exercise-index' was found in the service 'cognitive-search-openai-exercise'. 
\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"c:\\Users\\xxxxxxx\\venv\\azure_openai_cognitive_search_exercise\\test.py\", line 25, in \n vector_store: AzureSearch = AzureSearch(\n ^^^^^^^^^^^^\n File \"C:\\Users\\xxxxxxx\\venv\\Lib\\site-packages\\langchain\\vectorstores\\azuresearch.py\", line 237, in __init__\n self.client = _get_search_client(\n ^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\xxxxxxxx\\venv\\Lib\\site-packages\\langchain\\vectorstores\\azuresearch.py\", line 172, in _get_search_client \n index_client.create_index(index)\n File \"C:\\Users\\xxxxxxx\\venv\\Lib\\site-packages\\azure\\core\\tracing\\decorator.py\", line 78, in wrapper_use_tracer\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\xxxxxxx\\venv\\Lib\\site-packages\\azure\\search\\documents\\indexes\\_search_index_client.py\", line 220, in create_index\n result = self._client.indexes.create(patched_index, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\xxxxxxx\\venv\\Lib\\site-packages\\azure\\core\\tracing\\decorator.py\", line 78, in wrapper_use_tracer\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\xxxxxx\\venv\\Lib\\site-packages\\azure\\search\\documents\\indexes\\_generated\\operations\\_indexes_operations.py\", \nline 402, in create\n raise HttpResponseError(response=response, model=error)\nazure.core.exceptions.HttpResponseError: (InvalidRequestParameter) The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set.\nCode: InvalidRequestParameter\nMessage: The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set.\nException Details: (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition\n Code: InvalidField\n Message: The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. 
Parameters: definition\n\n\n\nI have created an index manually from the Azure Cognitive Search Console, but I don't think this is the correct approach, as the script should automatically create a new index."} +{"id": "000323", "text": "I want to parallelize RetrievalQA with asyncio but I am unable to figure out how.\nThis is how my code works serially:\nimport langchain\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain import PromptTemplate, LLMChain\nfrom langchain.chains import RetrievalQA\nfrom langchain.vectorstores import FAISS\nfrom langchain.schema.vectorstore import VectorStoreRetriever\nimport asyncio\nimport nest_asyncio\n\nretriever = VectorStoreRetriever(vectorstore=FAISS(...))\n\nchat = ChatOpenAI(model=\"gpt-3.5-turbo-16k\", temperature=0.7)\n\nqa_chain = RetrievalQA.from_llm(chat, retriever= retriever\n #,memory=memory\n , return_source_documents=True\n )\n\nqueries = ['query1', 'query2', 'query3']\ndata_to_append = []\n\nfor query in queries :\n\n vectordbkwargs = {\"search_distance\": 0.9}\n result = qa_chain({\"query\": query, \"vectordbkwargs\": vectordbkwargs})\n\n data_to_append.append({\"Query\": query, \"Source_Documents\": result[\"source_documents\"], \"Generated_Text\": result[\"result\"]})\n\n\nHere was my attempt to parallelize it with asyncio but RetrievalQA doesn't seem to work async:\nimport langchain\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain import PromptTemplate, LLMChain\nfrom langchain.chains import RetrievalQA\nfrom langchain.vectorstores import FAISS\nfrom langchain.schema.vectorstore import VectorStoreRetriever\nimport asyncio\nimport nest_asyncio\n\nretriever = VectorStoreRetriever(vectorstore=FAISS(...))\n\nchat = ChatOpenAI(model=\"gpt-3.5-turbo-16k\", temperature=0.7)\n\n\nqa_chain = RetrievalQA.from_llm(chat, retriever= retriever\n , return_source_documents=True\n )\n\nqueries = ['query1', 'query2', 'query3']\ndata_to_append = []\n\n\n\nasync def process_query(query):\n\n vectordbkwargs = {\"search_distance\": 0.9}\n result = await qa_chain({\"query\": query, \"vectordbkwargs\": vectordbkwargs})\n data_to_append.append({\"Query\": query, \"Source_Documents\": result[\"source_documents\"], \"Generated_Text\": result[\"result\"]})\n\n\nasync def main():\n\n tasks = []\n\n for query in queries: # Iterate all rows\n task = process_query(query)\n tasks.append(task)\n\n await asyncio.gather(*tasks)\n\nif __name__ == \"__main__\":\n nest_asyncio.apply()\n asyncio.run(main())\n\nAny help would be greatly appreciated."} +{"id": "000324", "text": "I am currently working on a project where I have implemented the ConversationalRetrievalQAChain, with the option \"returnSourceDocuments\" set to true. The system works perfectly when I ask specific questions related to the VectorStore database, as it returns matching sources. However, when I pose generic questions like \"Is the earth round?\" or \"How are you today?\" it returns unrelated sources that don't align with the query. It appears that the vector store always returns documents, even if they don't match the query.\nI'm seeking guidance on how to enhance the relevance of the source documents retrieved by the Langchain ConversationalRetrievalQAChain. Are there specific tools or techniques within the Langchain framework that can help mitigate this behavior, or is it necessary to develop a manual process to assess document relevance? 
How can I effectively limit the retrieval of unrelated source documents in this scenario?\nHere is relevant part of the code:\nasync init(): Promise {\n try {\n this.driver = await this.getQdrantDriver()\n this.retriever = await this.createRetrieverFromDriver()\n this.chat = new ChatOpenAI({ modelName: aiConfig.modelName })\n this.chain = await this.createQAChain(this.chat)\n\n this.questionGenerationChain = await this.createQuestionGenerationChain()\n this.conversation = new ConversationalRetrievalQAChain({\n retriever: this.retriever,\n combineDocumentsChain: this.chain,\n questionGeneratorChain: this.questionGenerationChain,\n returnSourceDocuments: true,\n })\n } catch (error) {\n Logger.error(error.message)\n throw error\n }\n }\n\n private async createQuestionGenerationChain(): Promise {\n const { default: Prompt } = await import('App/Models/Prompt')\n return new LLMChain({\n llm: this.chat,\n prompt: await Prompt.fetchCondensePrompt(),\n })\n }\n\n private async createRetrieverFromDriver(): Promise> {\n return this.driver.asRetriever(qdrantConfig.noResults ?? 5)\n }\n\n private async getQdrantDriver(embeddings = new OpenAIEmbeddings(), collectionName: string | null = null): Promise {\n const { default: Ingest } = await import('App/Models/Ingest')\n return new QdrantVectorStore(\n embeddings,\n {\n url: qdrantConfig.qdrantUrl,\n collectionName: collectionName ?? await Ingest.lastCollection(),\n },\n )\n }\n\n private async createQAChain(chat: BaseLanguageModel): Promise {\n const { default: Prompt } = await import('App/Models/Prompt')\n return loadQAStuffChain(chat, {\n prompt: await Prompt.fetchQuestionPrompt(),\n })\n }"} +{"id": "000325", "text": "I'm attempted to pass draft documents and have my chatbot generate a template using a prompt create a non disclosure agreement draft for California between mike llc and fantasty world. with my code below the response i'm getting is:\n\"I'm sorry, but I cannot generate a non-disclosure agreement draft for you. However, you can use the provided context information as a template to create a non-disclosure agreement between Mike LLC and fantasty world. 
Just replace the placeholders in the template with the appropriate names and information for your specific agreement.\nHere is my setup:\nimport sys\nimport os\nimport openai\nimport constants\nimport gradio as gr\nfrom langchain.chat_models import ChatOpenAI\n\nfrom llama_index import SimpleDirectoryReader, GPTListIndex, GPTVectorStoreIndex, LLMPredictor, PromptHelper, load_index_from_storage\n\n# Disable SSL certificate verification (for debugging purposes)\nos.environ['REQUESTS_CA_BUNDLE'] = '' # Set it to an empty string\n\nos.environ[\"OPENAI_API_KEY\"] = constants.APIKEY\nopenai.api_key = os.getenv(\"OPENAI_API_KEY\")\nprint(os.getenv(\"OPENAI_API_KEY\"))\n\ndef createVecorIndex(path):\n max_input = 4096\n tokens = 512\n chunk_size = 600\n max_chunk_overlap = 0.1\n\n prompt_helper = PromptHelper(max_input, tokens, max_chunk_overlap, chunk_size_limit=chunk_size)\n\n #define llm\n llmPredictor = LLMPredictor(llm=ChatOpenAI(temperature=.7, model_name='gpt-3.5-turbo', max_tokens=tokens))\n\n #load data\n docs = SimpleDirectoryReader(path).load_data()\n\n #create vector index\n vectorIndex = GPTVectorStoreIndex(docs, llmpredictor=llmPredictor, prompt_helper=prompt_helper)\n vectorIndex.storage_context.persist(persist_dir='vectorIndex.json')\n\n return vectorIndex\n\nvectorIndex = createVecorIndex('docs')\n\nIn my docs directory, I have a few examples of non-disclosure agreements to create the vector index.\nThis was my first attempt at the query:\ndef chatbot(input_index):\n query_engine = vectorIndex.as_query_engine()\n response = query_engine.query(input_index)\n return response.response\n\ngr.Interface(fn=chatbot, inputs=\"text\", outputs=\"text\", title=\"Super Awesome Chatbot\").launch()\n\nI can't seem to get it to generate the draft, it keeps giving me the \"I cannot generate a draft\" response\nI also tried to create a clause for the word draft, but the setup below is essential useing the trained model instead my vector.\ndef chatbot(input_index):\n query_engine = vectorIndex.as_query_engine()\n\n # If the \"draft\" clause is active:\n if \"draft\" in input_index.lower():\n # Query the vectorIndex for relevant information/context\n vector_response = query_engine.query(input_index).response\n print(vector_response)\n # Use vector_response as context to query the OpenAI API for a draft\n prompt = f\"Based on the information: '{vector_response}', generate a draft for the input: {input_index}\"\n \n response = openai.Completion.create(\n engine=\"text-davinci-002\",\n prompt=prompt,\n max_tokens=512,\n temperature=0.2\n )\n \n openai_response = response.choices[0].text.strip()\n \n return openai_response\n\n # If \"draft\" clause isn't active, use just the vectorIndex response\n else:\n print('else clause')\n return query_engine.query(input_index).response"} +{"id": "000326", "text": "This is how I am defining the executor\nconst executor = await initializeAgentExecutorWithOptions(tools, model, {\n agentType: 'chat-conversational-react-description',\n verbose: false,\n});\n\nWhenever I prompt the AI I have this statement at the end.\ntype SomeObject = {\n field1: number,\n field2: number,\n}\n\n- It is very critical that you answer only as the above object and JSON stringify it as a single string.\n Don't include any other verbose explanatiouns and don't include the markdown syntax anywhere.\n\nThe SomeObject is just an example. 
Usually it will have a proper object type.\nWhen I use the executor to get a response from the AI, half the time I get the proper JSON string, but the other half the times are the AI completely ignoring my instructions and gives me a long verbose answer in just plain English...\nHow can I make sure I always get the structured data answer I want?\nMaybe using the agentType: 'chat-conversational-react-description' isn't the right approach here?"} +{"id": "000327", "text": "I am using Langchain with OpenAI API for getting the summary of PDF Files. Some of my PDFs have many pages (more than the max token allowed in ChatGPT). Im trying two approaches to reduce the tokens so that I can input longer texts, but is still not working for a 300 inch- PDF.\n\nRetrieval augmented generation: more specifically the text splitter\n\ntext_splitter = RecursiveCharacterTextSplitter(chunk_size = 1000, chunk_overlap = 50)\n all_splits = text_splitter.split_documents(data)\n\n\nText summarisation: using stuff documents chain\n\n stuff_chain = StuffDocumentsChain(llm_chain=llm_chain, document_variable_name=\"text\")\n\nI would like to understand what is the text splitter doing because is not helping me to input longer text in the prompt. How can do this?"} +{"id": "000328", "text": "I'm using langchain with Azure OpenAI and Azure Cognitive Search.\nCurrently I'm using Azure OpenAI text-embedding-ada-002 model for generating embeddings, but I would like to use a embbeding model from HugginFace if possible, because Azure OpenAI API does not allow to send documents in batches, so I need to make several calls and hit the rate limit.\nI tried using this embbeding in my code:\nembeddings = SentenceTransformerEmbeddings(\n model_name=\"all-mpnet-base-v2\",\n )\n\nInstead of:\nembeddings = OpenAIEmbeddings(\n ...\n)\n\nThe problem I'm facing, is that when I use AzureSearch's aadd_texts method I get this error:\nThe vector field 'content_vector' dimensionality must match the field definition's 'dimensions' property. Expected: '1536'. Actual: '768'. (IndexDocumentsFieldError) 98: The vector field 'content_vector' dimensionality must match the field definition's 'dimensions' property. Expected: '1536'. Actual: '768'.\n Code: IndexDocumentsFieldError\n\nI'm pretty lost. Did anyone used an open source embeddings model with Cognitive Search? How?"} +{"id": "000329", "text": "The following code do not do what it is supposed to do:\nfrom langchain.callbacks.base import BaseCallbackHandler\nfrom langchain import PromptTemplate\nfrom langchain.chains import LLMChain\nfrom langchain.llms import VertexAI\n\n\nclass MyCustomHandler(BaseCallbackHandler):\n def on_llm_end(self, event, context):\n print(f\"Prompt: {event.prompt}\")\n print(f\"Response: {event.response}\")\n\n\nllm = VertexAI(\n model_name='text-bison@001',\n max_output_tokens=1024,\n temperature=0.3,\n verbose=False)\nprompt = PromptTemplate.from_template(\"1 + {number} = \")\nhandler = MyCustomHandler()\nchain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])\nresponse = chain.run(number=2)\nprint(response)\n\nBased on this documentation and this tutorial, the code should execute the custom handler callback on_llm_end but in fact it doesn't.\nCan anyone please tell me why?"} +{"id": "000330", "text": "I'm trying to test a chat agent using the python code below. I'm using langchain agent and tool from langchain. I'm defining a couple of simple functions for the LLM to use as tools when a prompt mentions something relevant to the tool. 
I'm using the openai gpt-3.5-turbo model for the LLM. I'm getting the error message below when trying to run conversational_agent with a simple prompt to return a random number. The function defined for the tool should do this easily. I'm getting the error message below mentioning that openai doesn't have the attribute. Does anyone see what the issue might be and can you suggest how to fix it?\ncode:\nfrom config import api_key,new_personal_api_key\n\napikey=new_personal_api_key\n\n# apikey=api_key\n\nimport os\n\nos.environ['OPENAI_API_KEY'] = apikey\n\n\nfrom langchain.chains.conversation.memory import ConversationBufferWindowMemory\n\n\nfrom langchain.agents import Tool\nfrom langchain.tools import BaseTool\n\ndef meaning_of_life(input=\"\"):\n return 'The meaning of life is 42 if rounded but is actually 42.17658'\n \nlife_tool = Tool(\n name='Meaning of Life',\n func= meaning_of_life,\n description=\"Useful for when you need to answer questions about the meaning of life. input should be MOL \"\n)\n\n\nimport random\n\ndef random_num(input=\"\"):\n return random.randint(0,5)\n \n \nrandom_tool = Tool(\n name='Random number',\n func= random_num,\n description=\"Useful for when you need to get a random number. input should be 'random'\"\n)\n\nfrom langchain import OpenAI \nfrom langchain.chat_models import ChatOpenAI\n\n# Set up the turbo LLM\nturbo_llm = ChatOpenAI(\n temperature=0,\n model_name='gpt-3.5-turbo'\n)\n\n\n\nfrom langchain.agents import initialize_agent\n\ntools = [random_tool, life_tool]\n\n# conversational agent memory\nmemory = ConversationBufferWindowMemory(\n memory_key='chat_history',\n k=3,\n return_messages=True\n)\n\n\n# create our agent\nconversational_agent = initialize_agent(\n agent='chat-conversational-react-description',\n tools=tools,\n llm=turbo_llm,\n# llm=local_llm,\n verbose=True,\n max_iterations=3,\n early_stopping_method='generate',\n memory=memory,\n handle_parsing_errors=True\n)\n\n\nconversational_agent('Can you give me a random number?')\n\nerror:\n> Entering new AgentExecutor chain...\n\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\nCell In[12], line 1\n----> 1 conversational_agent('Can you give me a random number?')\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)\n 308 except BaseException as e:\n 309 run_manager.on_chain_error(e)\n--> 310 raise e\n 311 run_manager.on_chain_end(outputs)\n 312 final_outputs: Dict[str, Any] = self.prep_outputs(\n 313 inputs, outputs, return_only_outputs\n 314 )\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)\n 297 run_manager = callback_manager.on_chain_start(\n 298 dumpd(self),\n 299 inputs,\n 300 name=run_name,\n 301 )\n 302 try:\n 303 outputs = (\n--> 304 self._call(inputs, run_manager=run_manager)\n 305 if new_arg_supported\n 306 else self._call(inputs)\n 307 )\n 308 except BaseException as e:\n 309 run_manager.on_chain_error(e)\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/agents/agent.py:1146, in AgentExecutor._call(self, inputs, run_manager)\n 1144 # We now enter the agent loop (until it returns something).\n 1145 while self._should_continue(iterations, time_elapsed):\n-> 1146 next_step_output = 
self._take_next_step(\n 1147 name_to_tool_map,\n 1148 color_mapping,\n 1149 inputs,\n 1150 intermediate_steps,\n 1151 run_manager=run_manager,\n 1152 )\n 1153 if isinstance(next_step_output, AgentFinish):\n 1154 return self._return(\n 1155 next_step_output, intermediate_steps, run_manager=run_manager\n 1156 )\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/agents/agent.py:933, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\n 930 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)\n 932 # Call the LLM to see what to do.\n--> 933 output = self.agent.plan(\n 934 intermediate_steps,\n 935 callbacks=run_manager.get_child() if run_manager else None,\n 936 **inputs,\n 937 )\n 938 except OutputParserException as e:\n 939 if isinstance(self.handle_parsing_errors, bool):\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/agents/agent.py:546, in Agent.plan(self, intermediate_steps, callbacks, **kwargs)\n 534 \"\"\"Given input, decided what to do.\n 535 \n 536 Args:\n (...)\n 543 Action specifying what tool to use.\n 544 \"\"\"\n 545 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)\n--> 546 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)\n 547 return self.output_parser.parse(full_output)\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chains/llm.py:298, in LLMChain.predict(self, callbacks, **kwargs)\n 283 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:\n 284 \"\"\"Format prompt with kwargs and pass to LLM.\n 285 \n 286 Args:\n (...)\n 296 completion = llm.predict(adjective=\"funny\")\n 297 \"\"\"\n--> 298 return self(kwargs, callbacks=callbacks)[self.output_key]\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)\n 308 except BaseException as e:\n 309 run_manager.on_chain_error(e)\n--> 310 raise e\n 311 run_manager.on_chain_end(outputs)\n 312 final_outputs: Dict[str, Any] = self.prep_outputs(\n 313 inputs, outputs, return_only_outputs\n 314 )\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chains/base.py:304, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)\n 297 run_manager = callback_manager.on_chain_start(\n 298 dumpd(self),\n 299 inputs,\n 300 name=run_name,\n 301 )\n 302 try:\n 303 outputs = (\n--> 304 self._call(inputs, run_manager=run_manager)\n 305 if new_arg_supported\n 306 else self._call(inputs)\n 307 )\n 308 except BaseException as e:\n 309 run_manager.on_chain_error(e)\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chains/llm.py:108, in LLMChain._call(self, inputs, run_manager)\n 103 def _call(\n 104 self,\n 105 inputs: Dict[str, Any],\n 106 run_manager: Optional[CallbackManagerForChainRun] = None,\n 107 ) -> Dict[str, str]:\n--> 108 response = self.generate([inputs], run_manager=run_manager)\n 109 return self.create_outputs(response)[0]\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chains/llm.py:120, in LLMChain.generate(self, input_list, run_manager)\n 118 callbacks = run_manager.get_child() if run_manager else None\n 119 if isinstance(self.llm, BaseLanguageModel):\n--> 120 return self.llm.generate_prompt(\n 121 prompts,\n 122 stop,\n 123 callbacks=callbacks,\n 124 
**self.llm_kwargs,\n 125 )\n 126 else:\n 127 results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(\n 128 cast(List, prompts), {\"callbacks\": callbacks}\n 129 )\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chat_models/base.py:459, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)\n 451 def generate_prompt(\n 452 self,\n 453 prompts: List[PromptValue],\n (...)\n 456 **kwargs: Any,\n 457 ) -> LLMResult:\n 458 prompt_messages = [p.to_messages() for p in prompts]\n--> 459 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chat_models/base.py:349, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)\n 347 if run_managers:\n 348 run_managers[i].on_llm_error(e)\n--> 349 raise e\n 350 flattened_outputs = [\n 351 LLMResult(generations=[res.generations], llm_output=res.llm_output)\n 352 for res in results\n 353 ]\n 354 llm_output = self._combine_llm_outputs([res.llm_output for res in results])\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chat_models/base.py:339, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)\n 336 for i, m in enumerate(messages):\n 337 try:\n 338 results.append(\n--> 339 self._generate_with_cache(\n 340 m,\n 341 stop=stop,\n 342 run_manager=run_managers[i] if run_managers else None,\n 343 **kwargs,\n 344 )\n 345 )\n 346 except BaseException as e:\n 347 if run_managers:\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chat_models/base.py:492, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)\n 488 raise ValueError(\n 489 \"Asked to cache, but no cache found at `langchain.cache`.\"\n 490 )\n 491 if new_arg_supported:\n--> 492 return self._generate(\n 493 messages, stop=stop, run_manager=run_manager, **kwargs\n 494 )\n 495 else:\n 496 return self._generate(messages, stop=stop, **kwargs)\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chat_models/openai.py:365, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)\n 363 message_dicts, params = self._create_message_dicts(messages, stop)\n 364 params = {**params, **kwargs}\n--> 365 response = self.completion_with_retry(\n 366 messages=message_dicts, run_manager=run_manager, **params\n 367 )\n 368 return self._create_chat_result(response)\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chat_models/openai.py:297, in ChatOpenAI.completion_with_retry(self, run_manager, **kwargs)\n 293 def completion_with_retry(\n 294 self, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any\n 295 ) -> Any:\n 296 \"\"\"Use tenacity to retry the completion call.\"\"\"\n--> 297 retry_decorator = _create_retry_decorator(self, run_manager=run_manager)\n 299 @retry_decorator\n 300 def _completion_with_retry(**kwargs: Any) -> Any:\n 301 return self.client.create(**kwargs)\n\nFile ~/anaconda3/envs/llm_110623/lib/python3.10/site-packages/langchain/chat_models/openai.py:77, in _create_retry_decorator(llm, run_manager)\n 68 def _create_retry_decorator(\n 69 llm: ChatOpenAI,\n 70 run_manager: Optional[\n 71 Union[AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun]\n 72 ] = None,\n 73 ) -> Callable[[Any], Any]:\n 74 import openai\n 76 errors = [\n---> 77 openai.error.Timeout,\n 78 openai.error.APIError,\n 79 
openai.error.APIConnectionError,\n 80 openai.error.RateLimitError,\n 81 openai.error.ServiceUnavailableError,\n 82 ]\n 83 return create_base_retry_decorator(\n 84 error_types=errors, max_retries=llm.max_retries, run_manager=run_manager\n 85 )\n\nAttributeError: module 'openai' has no attribute 'error'"} +{"id": "000331", "text": "I am extracting text from pdf documents and load it to Azure Cognitive Search for a RAG approach. Unfortunately this does not work. I am receiving the error message\n\nAttributeError: 'str' object has no attribute 'page_content'\n\nWhat I want to do is\n\nExtract text from pdf via pymupdf - works\nUpload it to Azuer Vector search as embeddings with vectors and `filename``\nQuery this through ChatGPT model\n\nThis is my code:\n!pip install cohere tiktoken\n!pip install openai==0.28.1\n!pip install pymupdf\n!pip install azure-storage-blob azure-identity\n!pip install azure-search-documents --pre --upgrade\n!pip install langchain\n\nimport fitz\nimport time\nimport uuid\nimport os\nimport openai\n\nfrom PIL import Image\nfrom io import BytesIO\nfrom IPython.display import display\n\nfrom azure.identity import DefaultAzureCredential\nfrom azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient\n\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\n\nfrom langchain.chat_models import AzureChatOpenAI\nfrom langchain.vectorstores import AzureSearch\nfrom langchain.document_loaders import DirectoryLoader\nfrom langchain.document_loaders import TextLoader\nfrom langchain.text_splitter import TokenTextSplitter\nfrom langchain.chains import ConversationalRetrievalChain\nfrom langchain.prompts import PromptTemplate\n\nfrom google.colab import drive\n\nOPENAI_API_BASE = \"https://xxx.openai.azure.com\"\nOPENAI_API_KEY = \"xxx\"\nOPENAI_API_VERSION = \"2023-05-15\"\n\nopenai.api_type = \"azure\"\nopenai.api_key = OPENAI_API_KEY\nopenai.api_base = OPENAI_API_BASE\nopenai.api_version = OPENAI_API_VERSION\n\nAZURE_COGNITIVE_SEARCH_SERVICE_NAME = \"https://xxx.search.windows.net\"\nAZURE_COGNITIVE_SEARCH_API_KEY = \"xxx\"\nAZURE_COGNITIVE_SEARCH_INDEX_NAME = \"test\"\n\nllm = AzureChatOpenAI(deployment_name=\"gpt35\", openai_api_key=OPENAI_API_KEY, openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)\nembeddings = OpenAIEmbeddings(deployment_id=\"ada002\", chunk_size=1, openai_api_key=OPENAI_API_KEY, openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)\n\nacs = AzureSearch(azure_search_endpoint=AZURE_COGNITIVE_SEARCH_SERVICE_NAME,\n azure_search_key = AZURE_COGNITIVE_SEARCH_API_KEY,\n index_name = AZURE_COGNITIVE_SEARCH_INDEX_NAME,\n embedding_function = embeddings.embed_query)\n \ndef generate_tokens(s):\n text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)\n splits = text_splitter.split_text(s)\n\n return splits\n\ndrive.mount('/content/drive')\nfolder = \"/content/drive/.../pdf/\"\n\npage_content = ''\ndoc_content = ''\n \nfor filename in os.listdir(folder):\n file_path = os.path.join(folder, filename)\n if os.path.isfile(file_path):\n print(f\"Processing file: {file_path}\")\n\n doc = fitz.open(file_path)\n for page in doc: # iterate the document pages\n page_content += page.get_text() # get plain text encoded as UTF-8\n doc_content += page_content\n\n d = generate_tokens(doc_content)\n\n # the following line throws the error\n # how can i add the chunks + filename to \n # Azure Cognitive Search?\n\n 
acs.add_documents(documents=d)\n \n print(metadatas)\n print(\"----------\")\n print(doc_content)\n count = len(doc_content.split())\n print(\"Number of tokens: \", count)\n\n\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n in ()\n 31 all_texts.extend(d)\n 32 \n---> 33 acs.add_documents(documents=d)\n 34 \n 35 metadatas = [{\"Source\": f\"{i}-pl\"} for i in range(len(all_texts))]\n\n1 frames\n/usr/local/lib/python3.10/dist-packages/langchain/schema/vectorstore.py in (.0)\n 118 \"\"\"\n 119 # TODO: Handle the case where the user doesn't provide ids on the Collection\n--> 120 texts = [doc.page_content for doc in documents]\n 121 metadatas = [doc.metadata for doc in documents]\n 122 return self.add_texts(texts, metadatas, **kwargs)\n\nAttributeError: 'str' object has no attribute 'page_content'"} +{"id": "000332", "text": "I am extracting text from pdf documents and load it to Azure Cognitive Search for a RAG approach. Unfortunately this does not work. I am receiving the error message\nHttpResponseError: () The request is invalid. Details: The property 'content' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type.\nCode: \nMessage: The request is invalid. Details: The property 'content' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type.\n\nWhat i want to do is\n\nExtract text from pdf via pymupdf - works\nUpload it to Azure Vector search as embeddings with vectors and metdata `filename``\nQuery this through ChatGPT model\n\nBeside the error i want to add to this document object the metadata information filename but also dont know how to extend this ...\nMy code:\n!pip install cohere tiktoken\n!pip install openai==0.28.1\n!pip install pymupdf\n!pip install azure-storage-blob azure-identity\n!pip install azure-search-documents --pre --upgrade\n!pip install langchain\n\nimport fitz\nimport time\nimport uuid\nimport os\nimport openai\n\nfrom PIL import Image\nfrom io import BytesIO\nfrom IPython.display import display\n\nfrom azure.identity import DefaultAzureCredential\nfrom azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient\n\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\n\nfrom langchain.chat_models import AzureChatOpenAI\nfrom langchain.vectorstores import AzureSearch\nfrom langchain.docstore.document import Document\nfrom langchain.document_loaders import DirectoryLoader\nfrom langchain.document_loaders import TextLoader\nfrom langchain.text_splitter import TokenTextSplitter\nfrom langchain.chains import ConversationalRetrievalChain\nfrom langchain.prompts import PromptTemplate\n\nfrom google.colab import drive\n\nOPENAI_API_BASE = \"https://xxx.openai.azure.com\"\nOPENAI_API_KEY = \"xxx\"\nOPENAI_API_VERSION = \"2023-05-15\"\n\nopenai.api_type = \"azure\"\nopenai.api_key = OPENAI_API_KEY\nopenai.api_base = OPENAI_API_BASE\nopenai.api_version = OPENAI_API_VERSION\n\nAZURE_COGNITIVE_SEARCH_SERVICE_NAME = \"https://xxx.search.windows.net\"\nAZURE_COGNITIVE_SEARCH_API_KEY = \"xxx\"\nAZURE_COGNITIVE_SEARCH_INDEX_NAME = \"test\"\n\nllm = AzureChatOpenAI(deployment_name=\"gpt35\", openai_api_key=OPENAI_API_KEY, openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)\nembeddings = OpenAIEmbeddings(deployment_id=\"ada002\", chunk_size=1, openai_api_key=OPENAI_API_KEY, 
openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)\n\nacs = AzureSearch(azure_search_endpoint=AZURE_COGNITIVE_SEARCH_SERVICE_NAME,\n azure_search_key = AZURE_COGNITIVE_SEARCH_API_KEY,\n index_name = AZURE_COGNITIVE_SEARCH_INDEX_NAME,\n embedding_function = embeddings.embed_query)\n \ndef generate_tokens(s, f):\n text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)\n splits = text_splitter.split_text(s)\n i = 0\n\n documents = []\n for split in splits:\n metadata = {}\n metadata[\"index\"] = i\n metadata[\"file_source\"] = f\n i = i+1\n\n new_doc = Document(page_content=split, metadata=metadata)\n documents.append(new_doc)\n #documents = text_splitter.create_documents(splits)\n\n print (documents)\n\n return documents\n\n\ndrive.mount('/content/drive')\nfolder = \"/content/drive/.../pdf/\"\n\npage_content = ''\ndoc_content = ''\n \nfor filename in os.listdir(folder):\n file_path = os.path.join(folder, filename)\n if os.path.isfile(file_path):\n print(f\"Processing file: {file_path}\")\n\n doc = fitz.open(file_path)\n for page in doc: # iterate the document pages\n page_content += page.get_text() # get plain text encoded as UTF-8 \n d = generate_tokens(doc_content)\n\n # the following line throws the error\n # how can i add the chunks + filename to \n # Azure Cognitive Search?\n\n doc_content += page_content\n d = generate_tokens(doc_content, file_path)\n\n acs.add_documents(documents=d)\n \n print(metadatas)\n print(\"----------\")\n print(doc_content)\n count = len(doc_content.split())\n print(\"Number of tokens: \", count)\n\n\nHttpResponseError Traceback (most recent call last)\n in ()\n 31 all_texts.extend(d)\n 32 \n---> 33 acs.add_documents(documents=d)\n 34 \n 35 metadatas = [{\"Source\": f\"{i}-pl\"} for i in range(len(all_texts))]\n\n7 frames\n/usr/local/lib/python3.10/dist-packages/azure/search/documents/_generated/operations/_documents_operations.py in index(self, batch, request_options, **kwargs)\n 1249 map_error(status_code=response.status_code, response=response, error_map=error_map)\n 1250 error = self._deserialize.failsafe_deserialize(_models.SearchError, pipeline_response)\n-> 1251 raise HttpResponseError(response=response, model=error)\n 1252 \n 1253 if response.status_code == 200:\n\nHttpResponseError: () The request is invalid. Details: The property 'content' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type.\nCode: \nMessage: The request is invalid. Details: The property 'content' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type.\n\nThis is my index in Azure Cognitive Search index:"} +{"id": "000333", "text": "I'm working on an AI project but my current problem right now is that FAISS is taking far too long to load the documents. 
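For the `'str' object has no attribute 'page_content'` error in question 000331 above: `AzureSearch.add_documents` expects LangChain `Document` objects, while `split_text` returns plain strings. A minimal Python sketch that drops into the question's own loop (reusing its `generate_tokens`, `doc_content`, `file_path` and `acs`; the metadata key is illustrative, not from the original code):

from langchain.docstore.document import Document

chunks = generate_tokens(doc_content)  # list[str] from RecursiveCharacterTextSplitter.split_text
docs = [Document(page_content=chunk, metadata={"file_source": file_path}) for chunk in chunks]
acs.add_documents(documents=docs)
# or skip Document objects entirely and pass the raw strings:
# acs.add_texts(texts=chunks, metadatas=[{"file_source": file_path}] * len(chunks))

For the follow-up `search.documentFields` error in question 000332, one likely cause is that the pre-existing index schema lacks the field names LangChain's `AzureSearch` store writes by default (`content`, `content_vector`, `metadata`); letting the store create a fresh index via `index_name`, or adding those fields to the existing index, is the usual workaround.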
So I've moved it into its own service via FastAPI.\nEverything looks OK, but when I run it I get this error:\nDid not find openai_api_key, please add an environment variable `OPENAI_API_KEY`\n\nIn my code:\nembeddings = OpenAIEmbeddings()\ndb = FAISS.from_documents(documents, embeddings)\n\nNow, I am using OpenAI, but not in this service, so I did not add my key.\nFrom my understanding, it's just taking text, tokenizing it using OpenAI's token map, and then doing a search and finding the nearest related documents based on that query.\nThat, technically, does not actually reach out to OpenAI's servers, does it?\nAfterwards I'm just adding the related documents to the prompt that I send to OpenAI's servers, so if it's sending data to OpenAI twice, that's a tad inefficient, right?\nHow can I get this to just be its own service? Or am I wasting my time here?"} +{"id": "000334", "text": "I'm trying to build a simple RAG, and I'm stuck at this code:\nfrom langchain.embeddings.huggingface import HuggingFaceEmbeddings\nfrom llama_index import LangchainEmbedding, ServiceContext\n\nembed_model = LangchainEmbedding(\n HuggingFaceEmbeddings(model_name=\"thenlper/gte-large\")\n)\nservice_context = ServiceContext.from_defaults(\n chunk_size=256,\n llm=llm,\n embed_model=embed_model\n)\nindex = VectorStoreIndex.from_documents(documents, service_context=service_context)\n\nwhere I get ImportError: cannot import name 'LangchainEmbedding' from 'llama_index'\nHow can I solve this? Is it related to the fact that I'm working on Colab?"} +{"id": "000335", "text": "I have deployed an LLM locally which follows the OpenAI API schema. As its endpoint follows the OpenAI schema, I don't want to write a separate inference client.\nIs there any way we can utilize the existing OpenAI wrapper from LangChain to do inference against my localhost model?\nI checked that there is an OpenAI adapter in LangChain, but it seems like it requires a provider, for which I would again have to write a separate client.\nThe overall goal is to not write any redundant code, as it's already maintained by LangChain and may change with time. We can modify our API to match OpenAI's and it works out of the box.\nYour suggestions are appreciated."} +{"id": "000336", "text": "Trying to connect PostgreSQL with LangChain. LLM used - AzureOpenAI\nfrom langchain.llms import AzureOpenAI\n\nllms = AzureOpenAI( temperature=0,deployment_name=\"gpt3turbo\".......)\n\ntoolkit = SQLDatabaseToolkit(db=db,llm=llms)\n\nError:\nValidationError: 1 validation error for SQLDatabaseToolkit\nllm\n value is not a valid dict (type=type_error.dict)\n\nTried different versions of langchain."} +{"id": "000337", "text": "I am new to LangChain and following tutorial code as below:\nfrom langchain.vectorstores import Chroma\nfrom langchain.embeddings.openai import OpenAIEmbeddings\npersist_directory = \"C:/Users/shang/Documents/test/\"\nembedding = OpenAIEmbeddings()\nvectordb = Chroma(persist_directory, embedding_function=embedding)\n\nIt kept prompting an error. Did I miss anything here? 
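On the `Chroma(persist_directory, embedding_function=embedding)` call in question 000337 just above (its traceback follows below): in the LangChain releases current at the time, the first positional parameter of `Chroma.__init__` is the collection name, not the persist directory, so the path should be passed by keyword. A minimal sketch under that reading:

from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings

persist_directory = "C:/Users/shang/Documents/test/"
embedding = OpenAIEmbeddings()
vectordb = Chroma(
    persist_directory=persist_directory,  # keyword, so it is not consumed as collection_name
    embedding_function=embedding,
)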
Thanks\n---------------------------------------------------------------------------\nOSError Traceback (most recent call last)\ng:\\My Drive\\DataScience\\LLM\\LongChain\\all_inclusive.ipynb Cell 5 line 5\n 3 persist_directory = 'C:/Users/shang/Documents/test/'\n 4 embedding = OpenAIEmbeddings()\n----> 5 vectordb = Chroma(persist_directory, embedding_function=embedding)\n\nFile ~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\langchain\\vectorstores\\chroma.py:81, in Chroma.__init__(self, collection_name, embedding_function, persist_directory, client_settings, collection_metadata, client, relevance_score_fn)\n 79 \"\"\"Initialize with a Chroma client.\"\"\"\n 80 try:\n---> 81 import chromadb\n 82 import chromadb.config\n 83 except ImportError:\n\nFile ~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\chromadb\\__init__.py:3\n 1 from typing import Dict, Optional\n 2 import logging\n----> 3 from chromadb.api.client import Client as ClientCreator\n 4 from chromadb.api.client import AdminClient as AdminClientCreator\n 5 import chromadb.config\n\nFile ~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\chromadb\\api\\client.py:31\n 27 from chromadb.types import Database, Tenant, Where, WhereDocument\n 28 import chromadb.utils.embedding_functions as ef\n---> 31 class SharedSystemClient:\n...\n 1011 os.stat() does.\n 1012 \"\"\"\n-> 1013 return os.stat(self, follow_symlinks=follow_symlinks)\n\nOSError: [WinError 433] A device which does not exist was specified: '.env'"} +{"id": "000338", "text": "I am using Llama2[7b model]-hugging face and lang-chain to do a simple address segregation/classification task. I want the model to find the city, state and country from the input string.I want my answer/query formatted in a particular way for a question-answering/ text-generation task.I understand that i can use FewShotPromptTemplate, where in i can show some examples to the LLM and get the output in the format i want.\nI generated a few examples to feed in as samples :\nexamples = [\n {\"input\": \"Plot No. 7, Sector 22, Noida, Uttar Pradesh, 201301, India\",\n \"Address\": \"Plot No. 
7, Sector 22, Noida\",\n \"City\" : \"Noida\",\n \"State\" : \"Uttar Pradesh\",\n \"Country\" : \"India\"},\n\n\n {\"input\": \"Banjara Hills, Telangana, 500034, India\",\n \"Address\": \"Banjara Hills\",\n \"City\" : \"Not present\",\n \"State\" : \"Telangana\",\n \"Country\" : \"India\"},\n\n]\n\nI set the template\nexample_formatter_template = \"\"\"\ninput: {input},\nAddress : {Address},\nCity : {Address},\nState : {State},\nCountry : {Country},\n \\n\n\"\"\"\n# prompt\nexample_prompt = PromptTemplate(\n input_variables=[\"input\", \"Address\", \"City\", \"State\", \"Country\"],\n template=example_formatter_template)\n\nfew_shot_prompt = FewShotPromptTemplate(\n examples=examples,\n example_prompt=example_prompt,\n prefix=\"What is the address, city, state, country in the string : \",\n suffix=\"input: {input}\\n \",\n input_variables=[\"input\"],\n example_separator=\"\\n\")\n\n\nchain = LLMChain(llm=llm, prompt=few_shot_prompt, verbose = True)\n\n# Run the chain only specifying the input variable.\nprint(chain.run(\"B-12, Gandhi Colony, Bhopal, Madhya Pradesh, 462016, India\"))\n\n\nHere is an example of what i want :\n {\"input\": \"B-12, Gandhi Colony, Bhopal, Madhya Pradesh, 462016, India\",\n\n \"Address\": \"B-12, Gandhi Colony\",\n \"City\" : \"Bhopal\",\n \"State\" : \"Madhya Pradesh\",\n \"Country\" : \"India\"},\n\n\n\nI keep getting : format the expected output correctly from the model. And nothing is hence returned.\nAdditionally, I want to prevent the model from adding any extra information which\nis not present in the context/string otherwise the queries take very long to respond.\ni.e return '' or not found if city or state or country is not present in sting.\ncan someone help ?"} +{"id": "000339", "text": "Here is a simple code to use Redis and embeddings but It's not clear how can I build and load own embeddings and then pull it from Redis and use in search\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores.redis import Redis\n\nembeddings = OpenAIEmbeddings\nmetadata = [\n {\n \"user\": \"john\",\n \"age\": 18,\n \"job\": \"engineer\",\n \"credit_score\": \"high\"\n }\n]\ntexts = [\"foo\", \"foo\", \"foo\", \"bar\", \"bar\"]\n\nrds = Redis.from_texts(\n texts,\n embeddings,\n metadata,\n redis_url=\"redis://localhost:6379\",\n index_name=\"users\",\n)\n\nresults = rds.similarity_search(\"foo\")\nprint(results[0].page_content)\n\nBut I want to load a text from e.g. text file, create embedings and load into Redis for later use. Something like this:\nfrom openai import OpenAI\nclient = OpenAI()\n\ndef get_embedding(text, model=\"text-embedding-ada-002\"):\n text = text.replace(\"\\n\", \" \")\n return client.embeddings.create(input = [text], model=model).data[0].embedding\n\nDoes anyone have good example to implement this approach? Also wondering about TTL for embedings in Redis"} +{"id": "000340", "text": "langchain python agent react differently, for one prompt, it can import scanpy library, but not for the other one. 
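One concrete issue in the `example_formatter_template` of question 000338 above: the `City` line is formatted with `{Address}` instead of `{City}`, so every few-shot example shown to the model repeats the address where the city should be, which by itself can derail the output format. A corrected template, otherwise unchanged:

example_formatter_template = """
input: {input},
Address : {Address},
City : {City},
State : {State},
Country : {Country},
"""

Ending the suffix with a cue such as suffix="input: {input}\nAddress :" so the model only has to continue the pattern is a common few-shot trick, but that part is a suggestion rather than something the original post tried.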
My question is how to make sure to import the correct library without problem.\nfrom dotenv import load_dotenv, find_dotenv\nimport openai\nimport os\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.agents.agent_types import AgentType\nfrom langchain_experimental.agents.agent_toolkits import create_python_agent\nfrom langchain_experimental.tools import PythonREPLTool\nimport scanpy as sc\n\nload_dotenv(find_dotenv())\nopenai.api_key = os.environ[\"OPENAI_API_KEY\"]\n\nagent_executor = create_python_agent(\n llm=ChatOpenAI(temperature=0, model=\"gpt-4-1106-preview\"),\n tool=PythonREPLTool(),\n verbose=True,\n agent_type=AgentType.OPENAI_FUNCTIONS,\n agent_executor_kwargs={\"handle_parsing_errors\": True},\n)\n\nif run the following,\nagent_executor.run(\"set scanpy setting verbosity = 3 \")\nI get\n> Entering new AgentExecutor chain...\n\nInvoking: Python_REPL with import scanpy as sc\nsc.settings.verbosity = 3\nprint(sc.settings.verbosity)\n\n\n3\nThe verbosity level of Scanpy has been set to 3.\n\n> Finished chain.\nThe verbosity level of Scanpy has been set to 3.\n\nbut, if run the following,\npbmc = sc.datasets.pbmc68k_reduced()\nagent_executor.run(\"use 'scanpy' library and 'pbmc' object to plot a umap\")\n\nI get,\n> Entering new AgentExecutor chain...\nPython REPL can execute arbitrary code. Use with caution.\n\nInvoking: Python_REPL with import scanpy as sc\n\n\n\nInvoking: Python_REPL with import scanpy as sc\nresponded: It seems there was an issue with the execution of the import statement for the 'scanpy' library. I will attempt to resolve this and proceed with the task. Let's try importing the library again.\n\nIt appears that there is an issue with importing the 'scanpy' library in this environment. Without being able to import the library, I cannot proceed with plotting a UMAP of the 'pbmc' object. If the library and the necessary data were available, I would typically load the data, preprocess it, and then use the sc.pl.umap function to plot the UMAP. However, since I cannot execute the code here, I'm unable to complete this task."} +{"id": "000341", "text": "My code uses \"wikipedia\" to search for the relevant content. Below is the code\nLoad tools\ntools = load_tools(\n [\"wikipedia\"],\n llm=llm)\nagent = initialize_agent(\n tools,\n llm,\n agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n handle_parsing_errors=True,\n verbose=False\n)\nout = agent(f\"Does {var_1} cause {var_2} or the other way around?.\")\n\nInstead of \"wikipedia\", I want to use my own pdf document that is available in my local. Can anyone help me in doing this?\nI have tried using the below code\nfrom langchain.document_loaders import PyPDFium2Loader\nloader = PyPDFium2Loader(\"hunter-350-dual-channel.pdf\")\ndata = loader.load()\n\nbut i am not sure how to include this in the agent."} +{"id": "000342", "text": "After installing pip install langchain-experimental I have tried:\nfrom langchain_experimental.sql_database import SQLDatabase\n\nBut it does not work. The code is as follows:\n# 1. Load db with langchain\nfrom langchain.sql_database import SQLDatabase\ndb = SQLDatabase.from_uri(\"sqlite:////python/chatopenai/ecommerce.db\")\n\n# 2. Import APIs\nimport a_env_vars\nimport os\nos.environ[\"OPENAI_API_KEY\"] = a_env_vars.OPENAI_API_KEY\n\n# 3. Create LLM\nfrom langchain.chat_models import ChatOpenAI\nllm = ChatOpenAI(temperature=0,model_name='gpt-3.5-turbo')\n\n# 4. 
Create chain\nfrom langchain import SQLDatabaseChain\ncadena = SQLDatabaseChain(llm = llm, database = db, verbose=False)\n\nAnd the error is:\nImportError: cannot import name 'SQLDatabaseChain' from 'langchain' (C:\\Users\\jcarr\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\__init__.py) Traceback: File \"C:\\Users\\jcarr\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\streamlit\\runtime\\scriptrunner\\script_runner.py\", line 534, in _run_script\n exec(code, module.__dict__) File \"C:\\python\\chatOpenAI\\c_front_end.py\", line 3, in \n import b_backend File \"C:\\python\\chatOpenAI\\b_backend.py\", line 15, in \n from langchain import SQLDatabaseChain\n\nThis is after doing the same with \"langchain.sql_database\"."} +{"id": "000343", "text": "LangChain's BaseMessage has a function toJSON that returns a Serialized.\nOnce I have a list of BaseMessages, I can use toJSON to serialize them, but how can I later deserialize them?\nconst messages = [\n new HumanMessage(\"hello\"),\n new AIMessage(\"foo\"),\n new HumanMessage(\"bar\"),\n new AIMessage(\"baz\"),\n];\n\nconst serialized = messages.map((message) => message.toJSON());\n\nconst deserialized = ???"} +{"id": "000344", "text": "I am using the llama2 quantized model from Huggingface and loading it using ctransformers from langchain. When I run the query, I got the below warning\nNumber of tokens (512) exceeded maximum context length (512)\nBelow is my code:\nfrom langchain.llms import CTransformers\nllm = CTransformers(model='models_k/llama-2-7b-chat.ggmlv3.q2_K.bin',\n model_type='llama',\n config={'max_new_tokens': 512,\n 'temperature': 0.01}\n )\n\nB_INST, E_INST = \"[INST]\", \"[/INST]\"\nB_SYS, E_SYS = \"<>\\n\", \"\\n<>\\n\\n\"\n\nDEFAULT_SYSTEM_PROMPT=\"\"\"\\\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible. \nPlease ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. \nIf you don't know the answer to a question, please don't share false information.\"\"\"\n\ninstruction = db_schema + \" Based on the database schema provided to you \\n Convert the following text from natural language to sql query: \\n\\n {text} \\n only display the sql query\"\n\nSYSTEM_PROMPT = B_SYS + DEFAULT_SYSTEM_PROMPT + E_SYS\n\ntemplate = B_INST + SYSTEM_PROMPT + instruction + E_INST\n\nprompt = PromptTemplate(template=template, input_variables=[\"text\"])\nLLM_Chain=LLMChain(prompt=prompt, llm=llm)\nprint(LLM_Chain.run(\"List the names and prices of electronic products that cost less than $500.\"))\n\nCan anyone tell me why am i getting this error? Do I have to change the settings?"} +{"id": "000345", "text": "I'm using Langchain 0.0.345. 
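For the `ImportError` in question 000342 above: after `pip install langchain-experimental`, `SQLDatabaseChain` lives in `langchain_experimental.sql` rather than in the top-level `langchain` package, and its `from_llm` constructor takes the LLM and database directly. A minimal sketch reusing the question's `llm` and `db`:

# pip install langchain-experimental
from langchain_experimental.sql import SQLDatabaseChain

cadena = SQLDatabaseChain.from_llm(llm=llm, db=db, verbose=False)

On the context-length warning in question 000344, the ctransformers config also accepts a 'context_length' key (for example config={'max_new_tokens': 512, 'temperature': 0.01, 'context_length': 2048}), though how far it can be raised depends on the model.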
I cannot get a verbose output of what's going on under the hood using the LCEL approach to chain building.\nI have this code:\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.prompts import ChatPromptTemplate\nfrom langchain.schema.output_parser import StrOutputParser\nfrom langchain.globals import set_verbose\n\nset_verbose(True)\n\nprompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")\nmodel = ChatOpenAI()\noutput_parser = StrOutputParser()\n\nchain = prompt | model | output_parser\n\nchain.invoke({\"topic\": \"ice cream\"})\n\nAccording to the documentation using set_verbose is the way to have a verbose output showing intermediate steps, prompt builds etc. But the output of this script is just a string without any intermediate steps.\nActually, the module langchain.globals does not appear even mentioned in the API documentation.\nI have also tried setting the verbose=True parameter in the model creation, but it also does not work. This used to work with the former approach building with classes and so.\nHow is the recommended and current approach to have the output logged so you can understand what's going on?\nThanks!"} +{"id": "000346", "text": "This piece of code seems to not work. Even though this is the way that Pinecone have stated in their documentation that it should look like.\nvectorstore = Pinecone(index, embeddings.embed_query, text_field)\nThe error/warning is\nC:\\Users\\ndira\\casetext-test-server\\Lib\\site-packages\\langchain\\vectorstores\\pinecone.py:59: UserWarning: Passing in \"embedding\" as a Callable is deprecated. Please pass in an Embeddings object instead. warnings.warn(\nI don't know any other way of solving this. Kindly help thanks."} +{"id": "000347", "text": "I loaded pdf files from a directory and I need to split them to smaller chunks to make a summary. 
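On the LCEL logging question (000345) above: `set_debug(True)` from `langchain.globals` prints each runnable's inputs and outputs, including the fully formatted prompt, which is roughly what `verbose=True` used to show for the class-based chains. A minimal sketch reusing the question's `chain`:

from langchain.globals import set_debug

set_debug(True)  # logs every step of the runnable sequence
chain.invoke({"topic": "ice cream"})

Per-call tracing via chain.invoke({"topic": "ice cream"}, config={"callbacks": [ConsoleCallbackHandler()]}) is an alternative, though the exact import path for ConsoleCallbackHandler has moved between versions, so treat that part as an assumption to verify.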
The problem is that I can't iterate on documents object in a for loop and I get an error like this: AttributeError: 'tuple' object has no attribute 'page_content'\nHow can I iterate on my document items to call the summary function for each of them?\nHere is my code:\n# Load the documents\n\nfrom langchain.document_loaders import DirectoryLoader\ndocument_directory = \"pdf_files\"\nloader = DirectoryLoader(document_directory)\ndocuments = loader.load()\n\ntext_splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=50)\n\n# Iterate on long pdf documents to make chunks (2 pdf files here)\nfor doc in documents:\n \n # it fails on this line \n texts = text_splitter.split_documents(doc) \n chain = load_summarize_chain(llm, chain_type=\"map_reduce\", map_prompt=prompt, combine_prompt=prompt)"} +{"id": "000348", "text": "I have an LLM Chat model with token limitation.\nI am trying to pass Sample User Messages and Expected AI Message Responses to the LLM to train it how to provide a response based on text extracted from a document.\nI am loading the document with System Loader\n Document document = loadDocument(toPath(\"file:///filepath\\\\filename.pdf\"));\n\nI am using regex splitter to help the LLM understand a pattern\n DocumentByRegexSplitter splitter=new DocumentByRegexSplitter(regex,joiner,maxCharLimit,maxOverlap,subSplitter);\n\nAfter embedding the document (In-Memory embedding store and getting the relevant vectors), I join it into an information string which I can feed into a prompt template to generate a User Message\nPromptTemplate promptTemplate = PromptTemplate.from(\n \"Answer the following question to the best of your ability\"\n + \"Question:\\n\"\n + \"{{question}}\\n\"\n + \"\\n\"\n + \"Base your answer on the following information:\\n\"\n + \"{{information}}\");\n\nString information = relevantEmbeddings.stream()\n .map(match -> match.embedded().text())\n .collect(joining(\"\\n\\n\"));\n\nMap variables = new HashMap<>();\nvariables.put(\"question\", trainingQuestion);\nvariables.put(\"information\", information);\nPrompt prompt = promptTemplate.apply(variables);\n\n\nList chatMessages=new ArrayList<>();\nchatMessages.add(prompt .toUserMessage());\nchatMessages.add(new AiMessage(\"Expected Response\"));\n\n variables.put(\"question\", actualQuestion);\n variables.put(\"information\", information);\n prompt = promptTemplate.apply(variables);\nchatMessages.add(prompt .toUserMessage());\n\nI will add the traning messages to a List as required by the Java Langchain framework\nAiMessage response=chatModel.generate(chatMessages);\n\nTo make a long story short, I am facing the token constraint because of embedding the same document information for all the Few Shot messages.\nIs there a way to make the LLM use the same document as a reference for the Few-Shot training and the actual query so I can avoid consuming tokens for the document multiple times?"} +{"id": "000349", "text": "I was getting an error when trying to use a Pydantic schema as an args_schema parameter value on a @tool decorator, following the DeepLearning.AI course.\nMy code was:\nfrom pydantic import BaseModel, Field\n\nclass SearchInput(BaseModel):\n query: str = Field(description=\"Thing to search for\")\n\n@tool(args_schema=SearchInput)\ndef search(query: str) -> str:\n \"\"\"Searches for weather online\"\"\"\n return \"21c\"\n\nAnd was getting this error:\nValidationError: 1 validation error for StructuredTool\nargs_schema subclass of BaseModel expected (type=type_error.subclass; 
expected_class=BaseModel)"} +{"id": "000350", "text": "I ran the bellow code. However, currently, it shows an error massage.\nfrom langchain.llms import GooglePalm\n\napi_key = 'my_API'\n\n\nllm = GooglePalm(google_api_key=api_key,\n temperature=0.1)\n\nThis is the error I got.\nNotImplementedError Traceback (most recent call last)\n in ()\n 5 \n 6 # Create llm variable here\n----> 7 llm = GooglePalm(google_api_key=api_key,\n 8 temperature=0.1)\n\n2 frames\n/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py in warn_deprecated(since, message, name, alternative, pending, obj_type, addendum, removal)\n 293 if not removal:\n 294 removal = f\"in {removal}\" if removal else \"within ?? minor releases\"\n--> 295 raise NotImplementedError(\n 296 f\"Need to determine which default deprecation schedule to use. \"\n 297 f\"{removal}\"\n\nNotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases\n\nCan someone please help me to solve this?\nI need to create the large language model variable."} +{"id": "000351", "text": "I am generating chromba db which has vector embeddings for pdf different documents and I want to store them to avoid re computation every time for different inputs. Pickling and Json serialization does not seem to work for chroma object, importing from another file also makes the embedding script run again."} +{"id": "000352", "text": "I'm currently working on a project that involves Language Models (LLMs) and Chat Models, and I'm using the langchain library in Python to list available models. However, I'm encountering an ImportError when running the code.\nHere's the code snippet I'm using:\nfrom langchain.chat_models import list_available_models\nmodel_names = list_available_models()\nprint(model_names)\n\nThe error message I receive is as follows:\nImportError: cannot import name 'list_available_models' from 'langchain.chat_models' (c:\\Users\\Edge\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\langchain\\chat_models\\__init__.py)\n\nI've double-checked the library and the code, but I can't seem to find a solution to this issue. Could someone please help me understand what might be causing this ImportError and how I can resolve it?"} +{"id": "000353", "text": "i'm trying to create a chatbot using OpenAi Langchain and a cloud database (MongoDb in my case). What I do, is load a PDF, I read the data, create chunks from it and then create embeddings using \"text-embedding-ada-002\" by OpenAi. After that I store in my DB the filename, the text of the PDF the list of embeddings, and the list of messages. 
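The `args_schema subclass of BaseModel expected` error in question 000349 above is typically a pydantic v1/v2 mismatch: the `@tool` decorator of that LangChain generation validates against pydantic v1's `BaseModel`, while `from pydantic import BaseModel` yields the v2 class. A sketch of the usual fix, importing the v1-compatible classes instead:

from langchain.tools import tool
from langchain_core.pydantic_v1 import BaseModel, Field  # or: from pydantic.v1 import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(description="Thing to search for")

@tool(args_schema=SearchInput)
def search(query: str) -> str:
    """Searches for weather online"""
    return "21c"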
It works good, but the problem is that i want to load the list of embeddings to create the Conversation Chain, but i don't know if it is possible to create it from the list of embeddings of i should save another thing and not the list of embeddings, because i don't want to create them each time i open the chat of the current PDf\nIf i use something like this to generate the vector store and then run the below code to create the conversation chain it works, but i want to load the list of embeddings i saved in the db\ndef get_embeddings(chunks: list[str]):\n embeddings = OpenAIEmbeddings()\n vector_store = MongoDBAtlasVectorSearch.from_texts(\n texts=chunks,\n embedding=embeddings,\n collection=embeddings_collection,\n index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n )\n return vector_store\n\ndef get_conversation_chain(vector_store):\n memory = ConversationBufferMemory(\n memory_key=\"chat_history\",\n vector_store=vector_store,\n similarity_threshold=0.8,\n max_memory_size=100,\n return_messages=True,\n input_key=\"question\")\n conversation_chain = ConversationalRetrievalChain.from_llm(\n retriever=vector_store.as_retriever(),\n llm=llm, \n memory=memory)\n result = conversation_chain({\"question\": \"what is the text about\"})\n print(result)\n return conversation_chain\n\nIs there a way to create a vector_store from the list of embeddings i saved? or should i use another type of conversation chain?"} +{"id": "000354", "text": "I am using Langchain to connect to OpenAi and some basic python calculation. below is the code i am using:\nfrom langchain.llms.fake import FakeListLLM\nfrom langchain.agents import load_tools\nfrom langchain.agents import initialize_agent\nfrom langchain.agents import AgentType\ntools = load_tools([\"python_repl\"])\nresponses=[\"Action: Python REPL\\nAction Input: print(2 + 2)\",\n\"Final Answer: 4\"\n]\nllm = FakeListLLM(responses=responses)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, \nverbose=True)\nagent.run(\"whats 2 + 2\"). \n\nI am referred to the langchain document and the code seems to be fine, there is no sysntexical error. in another code i called another library and created a new object here as:\nfrom langchain_experimental.utilities import PythonREPL\npython_repl = PythonREPL()\n\nThis code when ran on a simple instance runs:\nexample:\npython_repl.run(\"print(10+34)\")\n\nBut when i try to call python_repl from load_tool it throws error as ValueError: Got unknown tool python_repl. what is missed in the above code block."} +{"id": "000355", "text": "I'm trying to setup a local chatbot demo for testing purpose. I wanted to use LangChain as the framework and LLAMA as the model. Tutorials I found all involve some registration, API key, HuggingFace, etc, which seems unnecessary for my purpose.\nIs there a way to use a local LLAMA comaptible model file just for testing purpose? And also an example code to use the model with LangChain would be appreciated. Thanks!\nUPDATE: I wrote a blog post based on the accepted answer."} +{"id": "000356", "text": "I am trying to create RAG using the product manuals in pdf which are splitted, indexed and stored in Chroma persisted on a disk. 
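For the local-model question (000355) above: a LLaMA-compatible weights file can be run entirely offline through LangChain's `LlamaCpp` wrapper (backed by `llama-cpp-python`), with no registration, API key, or Hugging Face account. A minimal sketch; the model path is a placeholder for whatever GGUF/GGML file is on disk:

# pip install llama-cpp-python langchain
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder local file
    n_ctx=2048,
    temperature=0.1,
)
print(llm("Q: Name one use of a local LLM with LangChain.\nA:"))

The CTransformers wrapper seen in question 000344 is another key-free option for local GGML files.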
When I try the function that classifies the reviews using the documents context, below is the error I get:\n\nfrom langchain import PromptTemplate\nfrom langchain_core.output_parsers import StrOutputParser\nfrom langchain_core.runnables import RunnablePassthrough\nfrom langchain.embeddings import AzureOpenAIEmbeddings\nfrom langchain.chat_models import AzureChatOpenAI\nfrom langchain.vectorstores import Chroma\n\nllm = AzureChatOpenAI(\n azure_deployment=\"ChatGPT-16K\",\n openai_api_version=\"2023-05-15\",\n azure_endpoint=endpoint,\n api_key=result[\"access_token\"],\n temperature=0,\n seed = 100\n )\n\nembedding_model = AzureOpenAIEmbeddings(\n api_version=\"2023-05-15\",\n azure_endpoint=endpoint,\n api_key=result[\"access_token\"],\n azure_deployment=\"ada002\",\n)\n\nvectordb = Chroma(\n persist_directory=vector_db_path,\n embedding_function=embedding_model,\n collection_name=\"product_manuals\",\n)\n\n\ndef format_docs(docs):\n return \"\\n\\n\".join(doc.page_content for doc in docs)\n\ndef classify (review_title, review_text, product_num):\n\n template = \"\"\"\n \n You are a customer service AI Assistant that handles responses to negative product reviews. \n\n Use the context below and categorize {review_title} and {review_text} into defect, misuse or poor quality categories based only on provided context. If you don't know, say that you do not know, don't try to make up an answer. Respond back with an answer in the following format:\n\n poor quality\n misuse\n defect\n\n {context}\n \n Category: \n \"\"\"\n\n\n rag_prompt = PromptTemplate.from_template(template)\n \n retriever = vectordb.as_retriever(search_type=\"similarity\", search_kwargs={'filter': {'product_num': product_num}})\n\n\n retrieval_chain = (\n {\"context\": retriever | format_docs, \"review_title: RunnablePassthrough(), \"review_text\": RunnablePassthrough()}\n | rag_prompt\n | llm\n | StrOutputParser()\n )\n return retrieval_chain.invoke({\"review_title\": review_title, \"review_text\": review_text})\n\nclassify(review_title=\"Terrible\", review_text =\"This baking sheet is terrible. It stains so easily and i've tried everything to get it clean\", product_num =\"8888999\")\n\nError stack:\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\nFile , line 1\n----> 1 issue_recommendation(\n 2 review_title=\"Terrible\",\n 3 review_text=\"This baking sheet is terrible. It stains so easily and i've tried everything to get it clean. I've maybe used it 5 times and it looks like it's 20 years old. The side of the pan also hold water, so when you pick it up off the drying rack, water runs out. 
I would never purchase these again.\",\n 4 product_num=\"8888999\"\n 5 \n 6 )\n\nFile , line 44, in issue_recommendation(review_title, review_text, product_num)\n 36 retriever = vectordb.as_retriever(search_type=\"similarity\", search_kwargs={'filter': {'product_num': product_num}})\n 38 retrieval_chain = (\n 39 {\"context\": retriever | format_docs, \"review_text\": RunnablePassthrough()}\n 40 | rag_prompt\n 41 | llm\n 42 | StrOutputParser()\n 43 )\n---> 44 return retrieval_chain.invoke({\"review_title\":review_title, \"review_text\": review_text})\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:1762, in RunnableSequence.invoke(self, input, config)\n 1760 try:\n 1761 for i, step in enumerate(self.steps):\n-> 1762 input = step.invoke(\n 1763 input,\n 1764 # mark each step as a child run\n 1765 patch_config(\n 1766 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")\n 1767 ),\n 1768 )\n 1769 # finish the root run\n 1770 except BaseException as e:\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:2327, in RunnableParallel.invoke(self, input, config)\n 2314 with get_executor_for_config(config) as executor:\n 2315 futures = [\n 2316 executor.submit(\n 2317 step.invoke,\n (...)\n 2325 for key, step in steps.items()\n 2326 ]\n-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}\n 2328 # finish the root run\n 2329 except BaseException as e:\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:2327, in (.0)\n 2314 with get_executor_for_config(config) as executor:\n 2315 futures = [\n 2316 executor.submit(\n 2317 step.invoke,\n (...)\n 2325 for key, step in steps.items()\n 2326 ]\n-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}\n 2328 # finish the root run\n 2329 except BaseException as e:\n\nFile /usr/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)\n 449 raise CancelledError()\n 450 elif self._state == FINISHED:\n--> 451 return self.__get_result()\n 453 self._condition.wait(timeout)\n 455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:\n\nFile /usr/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)\n 401 if self._exception:\n 402 try:\n--> 403 raise self._exception\n 404 finally:\n 405 # Break a reference cycle with the exception in self._exception\n 406 self = None\n\nFile /usr/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)\n 55 return\n 57 try:\n---> 58 result = self.fn(*self.args, **self.kwargs)\n 59 except BaseException as exc:\n 60 self.future.set_exception(exc)\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:1762, in RunnableSequence.invoke(self, input, config)\n 1760 try:\n 1761 for i, step in enumerate(self.steps):\n-> 1762 input = step.invoke(\n 1763 input,\n 1764 # mark each step as a child run\n 1765 patch_config(\n 1766 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")\n 1767 ),\n 1768 )\n 1769 # finish the root run\n 1770 except BaseException as e:\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/retrievers.py:121, in BaseRetriever.invoke(self, 
input, config)\n 117 def invoke(\n 118 self, input: str, config: Optional[RunnableConfig] = None\n 119 ) -> List[Document]:\n 120 config = ensure_config(config)\n--> 121 return self.get_relevant_documents(\n 122 input,\n 123 callbacks=config.get(\"callbacks\"),\n 124 tags=config.get(\"tags\"),\n 125 metadata=config.get(\"metadata\"),\n 126 run_name=config.get(\"run_name\"),\n 127 )\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/retrievers.py:223, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)\n 221 except Exception as e:\n 222 run_manager.on_retriever_error(e)\n--> 223 raise e\n 224 else:\n 225 run_manager.on_retriever_end(\n 226 result,\n 227 **kwargs,\n 228 )\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/retrievers.py:216, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)\n 214 _kwargs = kwargs if self._expects_other_args else {}\n 215 if self._new_arg_supported:\n--> 216 result = self._get_relevant_documents(\n 217 query, run_manager=run_manager, **_kwargs\n 218 )\n 219 else:\n 220 result = self._get_relevant_documents(query, **_kwargs)\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/vectorstores.py:654, in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)\n 650 def _get_relevant_documents(\n 651 self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n 652 ) -> List[Document]:\n 653 if self.search_type == \"similarity\":\n--> 654 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)\n 655 elif self.search_type == \"similarity_score_threshold\":\n 656 docs_and_similarities = (\n 657 self.vectorstore.similarity_search_with_relevance_scores(\n 658 query, **self.search_kwargs\n 659 )\n 660 )\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:348, in Chroma.similarity_search(self, query, k, filter, **kwargs)\n 331 def similarity_search(\n 332 self,\n 333 query: str,\n (...)\n 336 **kwargs: Any,\n 337 ) -> List[Document]:\n 338 \"\"\"Run similarity search with Chroma.\n 339 \n 340 Args:\n (...)\n 346 List[Document]: List of documents most similar to the query text.\n 347 \"\"\"\n--> 348 docs_and_scores = self.similarity_search_with_score(\n 349 query, k, filter=filter, **kwargs\n 350 )\n 351 return [doc for doc, _ in docs_and_scores]\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:437, in Chroma.similarity_search_with_score(self, query, k, filter, where_document, **kwargs)\n 429 results = self.__query_collection(\n 430 query_texts=[query],\n 431 n_results=k,\n (...)\n 434 **kwargs,\n 435 )\n 436 else:\n--> 437 query_embedding = self._embedding_function.embed_query(query)\n 438 results = self.__query_collection(\n 439 query_embeddings=[query_embedding],\n 440 n_results=k,\n (...)\n 443 **kwargs,\n 444 )\n 446 return _results_to_docs_and_scores(results)\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/embeddings/openai.py:691, in OpenAIEmbeddings.embed_query(self, text)\n 682 def 
embed_query(self, text: str) -> List[float]:\n 683 \"\"\"Call out to OpenAI's embedding endpoint for embedding query text.\n 684 \n 685 Args:\n (...)\n 689 Embedding for the text.\n 690 \"\"\"\n--> 691 return self.embed_documents([text])[0]\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/embeddings/openai.py:662, in OpenAIEmbeddings.embed_documents(self, texts, chunk_size)\n 659 # NOTE: to keep things simple, we assume the list may contain texts longer\n 660 # than the maximum context and use length-safe embedding function.\n 661 engine = cast(str, self.deployment)\n--> 662 return self._get_len_safe_embeddings(texts, engine=engine)\n\nFile /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/embeddings/openai.py:465, in OpenAIEmbeddings._get_len_safe_embeddings(self, texts, engine, chunk_size)\n 459 if self.model.endswith(\"001\"):\n 460 # See: https://github.com/openai/openai-python/\n 461 # issues/418#issuecomment-1525939500\n 462 # replace newlines, which can negatively affect performance.\n 463 text = text.replace(\"\\n\", \" \")\n--> 465 token = encoding.encode(\n 466 text=text,\n 467 allowed_special=self.allowed_special,\n 468 disallowed_special=self.disallowed_special,\n 469 )\n 471 # Split tokens into chunks respecting the embedding_ctx_length\n 472 for j in range(0, len(token), self.embedding_ctx_length):\n\nFile /databricks/python/lib/python3.10/site-packages/tiktoken/core.py:116, in Encoding.encode(self, text, allowed_special, disallowed_special)\n 114 if not isinstance(disallowed_special, frozenset):\n 115 disallowed_special = frozenset(disallowed_special)\n--> 116 if match := _special_token_regex(disallowed_special).search(text):\n 117 raise_disallowed_special_token(match.group())\n 119 try:\n\nTypeError: expected string or buffer\n\n\nEmbeddings seems to work fine when I test. It also works fine when I remove the context and retriever from the chain. It seems to be related to embeddings. Examples on Langchain website instantiates retriver from Chroma.from_documents() whereas I load Chroma vector store from a persisted path. I also tried invoking with review_text only (instead of review title and review text) but the error persists. Not sure why this is happening. 
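A likely cause of the `TypeError: expected string or buffer` in question 000356: `retrieval_chain.invoke({...})` feeds the whole input dict into the retriever branch, so `tiktoken` ends up trying to encode a dict rather than a query string (note also the missing closing quote after the `review_title` key in the posted chain). Routing only a plain string into the retriever with `itemgetter` is the usual pattern; a sketch reusing the question's own names:

from operator import itemgetter

retrieval_chain = (
    {
        # send just the review text (a string) to the retriever
        "context": itemgetter("review_text") | retriever | format_docs,
        "review_title": itemgetter("review_title"),
        "review_text": itemgetter("review_text"),
    }
    | rag_prompt
    | llm
    | StrOutputParser()
)
retrieval_chain.invoke({"review_title": review_title, "review_text": review_text})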
These are the package versions I work:\nName: openai\nVersion: 1.6.1\nName: langchain\nVersion: 0.0.354"} +{"id": "000357", "text": "Most samples of using LangChain's Expression Language (LCEL) look like this:\nchain = setup_and_retrieval | prompt | model | output_parser\n\nHow can I access the source_documents in a RAG application when using this expression language?"} +{"id": "000358", "text": "I making a project which uses chromadb (0.3.29), llama-index (0.6.34.post1) and langchain (0.0.245), and openai (0.27.8).But I am getting response None when I tried to query in custom pdfs.even they are getting embedded successfully , below are my codes:\nimport os, re\nimport shutil\nimport time\nfrom grpc import ServicerContext\nimport vectordb\nfrom langchain import OpenAI\nfrom llama_index import GPTTreeIndex, SimpleDirectoryReader, LLMPredictor,GPTVectorStoreIndex,PromptHelper, VectorStoreIndex\nfrom llama_index import LangchainEmbedding, ServiceContext, Prompt\nfrom llama_index import StorageContext, load_index_from_storage\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.llms import AzureOpenAI\n# Import Azure OpenAI\n#from langchain_community.llms import AzureOpenAI\nimport chromadb\nfrom llama_index.vector_stores import ChromaVectorStore\n \nfrom dotenv import load_dotenv\nload_dotenv()\n#openai.api_key = os.getenv[\"OPENAI_API_KEY\"]\n\n\n\ndef regenrate_tokens(collection_name,persist_directory): \n \n if os.path.isdir((persist_directory)):\n print(\"directory existed ,replacing previous directory\")\n shutil.rmtree(persist_directory)\n print(\"Recreating Embeddings...\")\n vector=vectordb.CreatingChromaDB(collection_name,persist_directory)\n vector.storage_context.persist(persist_dir= persist_directory)\n\n else:\n print(\"directory does not exit, creating new embeddings.\")\n vector=vectordb.CreatingChromaDB(collection_name,persist_directory)\n vector.storage_context.persist(persist_dir= persist_directory)\n \n time.sleep(10) # Sleep for 10 seconds\n\n return('Token regenrated, you can ask the questions. ')\n\ndef query__from_knowledge_base(question):\n persist_directory = './ChromaDb'\n collection_name = \"chromaVectorStore\"\n\n \n if(question == 'regenerate tokens'):\n return(regenrate_tokens(collection_name,persist_directory))\n \n index = vectordb.LoadFromDisk(collection_name,persist_directory)\n print(index)\n # define custom Prompt\n # TEMPLATE_STR = (\n # \"We have provided context information below. \\n\"\n # \"---------------------\\n\"\n # \"{context_str}\"\n # \"\\n---------------------\\n\"\n # \"Given this information, please answer the question: {query_str}\\n\"\n # )\n TEMPLATE_STR = \"\"\"Create a final answer to the given questions using the provided document excerpts(in no particular order) as references. ALWAYS include a \"SOURCES\" section in your answer including only the minimal set of sources needed to answer the question. Always include the Source Preview of source. If answer has step in document please response in step. If you are unable to answer the question, simply state that you do not know. 
Do not attempt to fabricate an answer and leave the SOURCES section empty.\n\n \"---------------------\\n\"\n \"{context_str}\"\n \"\\n---------------------\\n\"\n \"Given this information, please answer the question: {query_str}\\n\"\n \"\"\"\n\n QA_TEMPLATE = Prompt(TEMPLATE_STR)\n \n query_engine = index.as_query_engine(text_qa_template=QA_TEMPLATE)\n print(query_engine)\n response = query_engine.query(question)\n print(question)\n # print(response)\n response = str(response) \n response = re.sub(r'Answer:', '', response)\n response = response.strip()\n return(response)\n \n\n#print(regenrate_tokens())\n#print(query__from_knowledge_base('Enabling online archive for the user\u2019s mailbox.'))\n\nfile vectordb.py,\ncontaining creation and querying methods are below:\ndef CreatingChromaDB(collection_name,persist_directory):\n\n documents = SimpleDirectoryReader('./static/upload/').load_data()\n # deployment_name = \"text-davinci-003\"\n deployment_name = \"gpt-3.5-turbo\"\n openai_api_version=\"30/08/2023\"\n\n # Create LLM via Azure OpenAI Service\n llm = AzureOpenAI(deployment_name=deployment_name,openai_api_version=openai_api_version)\n llm_predictor = LLMPredictor(llm=llm)\n llm_predictor = LLMPredictor(llm = llm_predictor)\n embedding_llm = LangchainEmbedding(OpenAIEmbeddings())\n\n # Define prompt helper\n max_input_size = 3000\n num_output = 256\n chunk_size_limit = 1000 # token window size per document\n max_chunk_overlap = 20 # overlap for each token fragment\n prompt_helper = PromptHelper(max_input_size=max_input_size, num_output=num_output,\n max_chunk_overlap=max_chunk_overlap, chunk_size_limit=chunk_size_limit)\n\n service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, embed_model=embedding_llm, prompt_helper=prompt_helper)\n chroma_client = chromadb.Client(Settings(\n chroma_db_impl=\"duckdb+parquet\",\n persist_directory= persist_directory))\n\n print(collection_name)\n\n # create a collection\n chroma_collection = chroma_client.get_or_create_collection(collection_name,embedding_function=embedding_llm)\n # https://docs.trychroma.com/api-reference\n print(chroma_collection.count())\n\n vector_store = ChromaVectorStore(chroma_collection)\n storage_context = StorageContext.from_defaults(vector_store=vector_store)\n index = GPTVectorStoreIndex.from_documents(documents, storage_context=storage_context, service_context=service_context)\n print(chroma_collection.count())\n print(chroma_collection.get()['documents'])\n print(chroma_collection.get()['metadatas'])\n\n # index.storage_context.persist()\n return index\n\ndef LoadFromDisk(collection_name,persist_directory):\n chroma_client = chromadb.Client(Settings(\n chroma_db_impl=\"duckdb+parquet\",\n persist_directory= persist_directory))\n\n print(collection_name)\n\n chroma_collection = chroma_client.get_or_create_collection(collection_name)\n vector_store = ChromaVectorStore(chroma_collection=chroma_collection)\n index = GPTVectorStoreIndex.from_vector_store(vector_store=vector_store)\n return index\n\nif we tried to regenerate tokens and try to query from pdfs then its shows \"None\" response, even if those files are embedded properly."} +{"id": "000359", "text": "Following LangChain docs in my Jupyter notebook with the following code :\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.output_parsers import StrOutputParser\n\n\nprompt = ChatPromptTemplate.from_template(\"Tell me a short joke about {topic}\")\nmodel = 
ChatOpenAI(model=\"gpt-3.5-turbo\")\noutput_parser = StrOutputParser()\n\nchain = prompt | model | output_parser\n\nDocs say that pip install langchain installs all necessary modules, including langchain-community and langchain-core\nHowever, I get this error:\nModuleNotFoundError: No module named 'langchain_openai'"} +{"id": "000360", "text": "I'm having trouble using LangChain embedding with Azure OpenAI credentials - it's showing a 404 error for resource not found.\nstack trace: Error: 404 Resource not found\n at APIError.generate (c:\\abcproject\\node_modules\\openai\\error.js:53:20\n\nimport { OpenAIEmbeddings } from \"@langchain/openai\"\n\nexport const embeddingModel = new OpenAIEmbeddings({ \n azureOpenAIApiKey: \"AzureOpenAI api key\",\n azureOpenAIApiVersion: \"2023-08-01-preview\",\n azureOpenAIApiDeploymentName: \"gpt-4-32k\",\n azureOpenAIBasePath:\"Azure OpenAI endpoint\"\n});"} +{"id": "000361", "text": "When I run the RAG chain code with OpenAI from langchain it gives me warning like this:\nPydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/\n warnings.warn('The `dict` method is deprecated; use `model_dump` instead.', DeprecationWarning)\n\nI have not place to replace dict with model_dump and I even have not encode it anywhere in my code. Any idea how to solve this warning?\nHere is my code:\nfrom client_setup import get_client\n#from langchain_community.vectorstores import Weaviate\n#from langchain_openai.OpenAI import OpenAI\nfrom langchain_openai import OpenAI\nfrom langchain_community.vectorstores import Weaviate\nfrom langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever\nfrom langchain.prompts import ChatPromptTemplate\nfrom langchain.schema.runnable import RunnablePassthrough\nfrom langchain.schema.output_parser import StrOutputParser\n\n\nclient = get_client()\n\nretriever = WeaviateHybridSearchRetriever(\n client=client, \n index_name=\"Material\",\n text_key=\"su\",\n attributes=[\"material\", \"heat_treatment\", \"su\", \"sy\"],\n create_schema_if_missing=True,\n )\n\nllm = OpenAI(api_key=\"\", model_name=\"gpt-3.5-turbo-instruct\")\n\n\ntemplate = \"\"\"You are an assistant for question-answering tasks. \nUse the following pieces of retrieved context to answer the question. \nIf you don't know the answer, just say that you don't know. 
\nUse three sentences maximum and keep the answer concise.\nQuestion: {question} \nContext: {context} \nAnswer:\n\"\"\"\nprompt = ChatPromptTemplate.from_template(template)\n\n#print(prompt)\nquery = \"What heat treatment is used for Steel SAE 1040?\"\n\nrag_chain = (\n {\"context\": retriever,\n \"question\": RunnablePassthrough()}\n | prompt\n | llm\n | StrOutputParser()\n )\n\n\nresult = rag_chain.invoke(query)\nprint(result)"} +{"id": "000362", "text": "When I try sample code given here:\nfrom langchain.document_loaders import ConfluenceLoader\n\nloader = ConfluenceLoader(\n url=\"\", username=\"\", \n api_key=\"\"\n)\ndocuments = loader.load(space_key=\"\", include_attachments=True, limit=1, max_pages=1)\n\nI get an error:\nAttributeError: 'str' object has no attribute 'get'\n\nHere is the last part of the stack:\n 554 \"\"\"\n 555 Get all pages from space\n 556 \n (...)\n 568 :return:\n 569 \"\"\"\n 570 return self.get_all_pages_from_space_raw(\n 571 space=space, start=start, limit=limit, status=status, expand=expand, content_type=content_type\n--> 572 ).get(\"results\")\n\nAny ideas? I see an issue here but it is still open.\nI have now also opened bug specifically for this issue.\nHere is the summary of the fixes required in the original code:\n\nDo not suffix the URL with /wiki/home\nsuffix the user name with @ your domain name\nuse ID of the space as in the URL and not its display name\n\nthen it works. The error handling is poor to point to these issues otherwise."} +{"id": "000363", "text": "I am confused by how multiple messages are combined and sent to a large language model such as ChatOpenAI.\nfrom langchain_core.prompts import ChatPromptTemplate\n\ntemplate = ChatPromptTemplate.from_messages([\n (\"system\", \"You are a helpful AI bot. Your name is {name}.\"),\n (\"human\", \"Hello, how are you doing?\"),\n (\"ai\", \"I'm doing well, thanks!\"),\n (\"human\", \"{user_input}\"),\n])\n\nmessages = template.format_messages(\n name=\"Bob\",\n user_input=\"What is your name?\"\n)\n\nmessages\n\n[SystemMessage(content='You are a helpful AI bot. 
Your name is Bob.'),\n HumanMessage(content='Hello, how are you doing?'),\n AIMessage(content=\"I'm doing well, thanks!\"),\n HumanMessage(content='What is your name?')]\n\nIs it generating text that looks like this:\nSystem:\nHuman:\nAssistant:\nHuman:\n...\n\nHow can I print the final text sent to the llm?"} +{"id": "000364", "text": "I wanted to add additional metadata to the documents being embedded and loaded into Chroma.\nI'm unable to find a way to add metadata to documents loaded using\nChroma.from_documents(documents, embeddings)\nFor example, imagine I have a text file having details of a particular disease, I wanted to add species as a metadata that is a list of all species it affects.\nAs a round-about way I loaded it in a chromadb collection by adding required metadata and persisted it\nclient = chromadb.PersistentClient(path=\"chromaDB\")\n\ncollection = client.get_or_create_collection(name=\"test\",\n embedding_function=openai_ef,\n metadata={\"hnsw:space\": \"cosine\"})\n\ncollection.add(\n documents=documents,\n ids=ids,\n metadatas=metadata\n)\n\nThis was the result,\ncollection.get(include=['embeddings','metadatas'])\n\nOutput:\n\n{'ids': ['id0',\n'id1',\n'embeddings': [[-0.014580891467630863,\n0.0003901976451743394,\n0.00793908629566431,\n-0.027648288756608963,\n-0.009689063765108585,\n0.010222840122878551,\n-0.00946609303355217,\n-0.002771923551335931,\n-0.04675614833831787,\n-0.02056729979813099,\n0.014364678412675858,\n...\n{'species': 'XYZ', 'source': 'Flu.txt'},\n{'species': 'ABC', 'source': 'Common_cold.txt'}],\n'documents': None,\n'uris': None,\n'data': None}\n\nNow I tried loading it from the directory persisted in the disk using Chroma.from_documents()\ndb = Chroma(persist_directory=\"chromaDB\", embedding_function=embeddings)\n\nBut I don't see anything loaded. db.get() results in this,\ndb.get(include=['metadatas'])\n\nOutput:\n\n{'ids': [],\n'embeddings': None,\n'metadatas': [],\n'documents': None,\n'uris': None,\n'data': None}\n\nPlease help. Need to load metadata to the files being loaded."} +{"id": "000365", "text": "I want to create a local LLM using falcon 40b instruct model and combine it with lanchain so I can give it a pdf or some resource to learn from so I can query it ask it questions, learn from it and ultimately be able to derive insights from the pdf report from an Excel sheet.\nFor now, I just want to load a pdf using langchain and have the falcon-40b-instruct model as the agent.\nI want to build an llm where I can make it interact with my own data using langchain.\nHere is my attempt so far:\nfrom langchain_community.llms import HuggingFaceHub\n\nllm = HuggingFaceHub(\nrepo_id=model_name,\ntask=\"text-generation\",\nmodel_kwargs={\n\"max_new_tokens\": 512,\n\"top_k\": 30,\n\"temperature\": 0.1,\n\"repetition_penalty\": 1.03\n},\nhuggingfacehub_api_token=\"hf_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\n)\n\nI reached the following stage:\nfrom langchain_community.chat_models.huggingface import ChatHuggingFace\nllm = ChatHuggingFace(llm=llm)\n\nyet I get this error:\n\nHfHubHTTPError: 401 Client Error: Unauthorized for url\n\nI am doing do this to be able to run the following:\nqa_chain = RetrievalQA.from_chain_type(\nllm=llm,\nretriever=vector_db.as_retriever()\n)\n\nWhat am I missing and is there a way to be able to do this fully local like doing the falcon model and pass it to ChatHuggingFace?"} +{"id": "000366", "text": "I am in the process of building a RAG like the one in this Video. 
However, I cannot import FAISS like this.\nfrom langchain.vectorstores import FAISS\n\nLangChainDeprecationWarning: Importing vector stores from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:\n\n`from langchain_community.vectorstores import faiss`.\n\nHowever, it is possible to import faiss:\nfrom langchain_community.vectorstores import faiss\n\nBut with this it is not possible to call faiss.from_text().\nvectorstore = faiss.from_texts(text=text_chunks, embeddings=embeddings)\n\nAttributeError: module 'langchain_community.vectorstores.faiss' has no attribute 'from_texts'\n\nIs it no longer possible to call .from_text() with the current one? I didn't find anything about this in the documentation.\nPython=3.10.13"} +{"id": "000367", "text": "I have a cluster that is not connected to the internet, although it has a sort of weights repository available. I need to run LLM inference on it.\nThe only option that I have found until now is using a combination of the transformers and langchain modules, but I don't want to tweak hyperparameters of models. I ran into the ollama software, but I cannot install anything on the cluster except Python libs. So, naturally, I wonder: what are my options for running LLM inference? And there are some more questions.\n\nCan I just install the ollama-python package and not install their Linux software? Or do I need both to run my inference?\nIf I manage to install ollama on this cluster, how can I provide pretrained weights to the model? If it helps, they are stored in (sometimes multiple) .bin files"} +{"id": "000368", "text": "I'm loading PDFs using langchain.document_loaders:\nloader = DirectoryLoader( './files/', glob='*.pdf', loader_cls=PyPDFLoader)\nthen split the docs, created the embeddings, stored and loaded them:\ndocsearch = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)\n\n...\n\ndocsearch = Chroma(persist_directory, embedding_function=embeddings ) \nretriever = docsearch.as_retriever( search_kwargs={\"k\": 5})\ndocs = retriever.get_relevant_documents( query )\nlen( docs)\n\nI'm getting a correct response but I'm getting 0 source documents."} +{"id": "000369", "text": "The below def load_documents function is able to load various documents such as .docx, .txt, and .pdf into langchain. I would also like to be able to load PowerPoint documents and found a script here: https://python.langchain.com/docs/integrations/document_loaders that I added to the below function.\nHowever, the function is unable to read .pptx files because I am not able to pip install UnstructuredPowerPointLoader. 
Can somebody please suggest a way to do this or to augment below function so I can load .pptx files?\nPython function follows below:\ndef load_document(file):\n import os\n name, extension = os.path.splitext(file)\n\n if extension == '.pdf':\n from langchain.document_loaders import PyPDFLoader\n print(f'Loading {file}')\n loader = PyPDFLoader(file)\n elif extension == '.docx':\n from langchain.document_loaders import Docx2txtLoader\n print(f'Loading {file}')\n loader = Docx2txtLoader(file)\n elif extension == '.txt':\n from langchain.document_loaders import TextLoader\n print(f'Loading {file}')\n loader = TextLoader(file)\n elif extension == '.pptx':\n from langchain_community.document_loaders import UnstructuredPowerPointLoader\n print(f'Loading {file}')\n loader = UnstructuredPowerPointLoader(file)\n else:\n print('Document format is not supported!')\n return None\n\n data = loader.load()\n return data\n\nThe error I am getting is because !pip install unstructured is failing. I tried also tried !pip install -q unstructured[\"all-docs\"]==0.12.0 but was unsuccessful again. Appreciate any help!"} +{"id": "000370", "text": "I'm trying to do the following simple code:\nfrom transformers import pipeline\nimport langchain\nfrom langchain.llms import HuggingFacePipeline\n\nmodel_name = \"bert-base-uncased\"\ntask = \"question-answering\"\n\nhf_pipeline = pipeline(task, model=model_name)\n\nlangchain_pipeline = HuggingFacePipeline(hf_pipeline)\n\nI get the following error:\n\nERROR: TypeError: Serializable.__init__() takes 1 positional argument but 2 were given\nLINE: langchain_pipeline = HuggingFacePipeline(hf_pipeline)\n\n\nHaven't found anything online that actually helped me here\n\n\nI'm using Databricks with the following cluster:\n\nRuntime: 12.2 LTS ML (includes Apache Spark 3.3.2, Scala 2.12)\nNode type: Standard_DS5_v2 56 GB Memory, 16 Cores\nLibraries:"} +{"id": "000371", "text": "The current code I have below works when I use gpt-3.5-turbo-instruct however when I use gpt-4 it doesn't work. I like using this framework because i care about getting only sql code back which the other agents don't do. How do I change it so that I can use other gpt models.\nfrom langchain.chains import create_sql_query_chain\nconnection_string = \"\"\ndb = SQLDatabase.from_uri(connection_string)\nllm = OpenAI(temperature=0, verbose=True, model='gpt-4')\n\nseed_prompt = \"\"\"\nGiven an input question, create a syntactically correct MySQL SQL query to run.\n\nQuestion: \"Question here\"\nSQLQuery: \"SQL Query to run\"\n\n\"\"\"\n\nrestrictions = \"\"\"\nNever use LIMIT statement, use TOP statement instead.\nFormat all numeric response ###,###,###,###.\nOnly return relevant columns to the question.\nIf a table or column does not exist, return table or column could not be found.\nQuestion: {input}\n\"\"\"\n\nprompt = seed_prompt + restrictions\nPROMPT = PromptTemplate(\n input_variables=[\"input\"], template=prompt\n)\n\ndatabase_chain = create_sql_query_chain(llm,db, prompt=PROMPT)\nsql_query = database_chain.invoke({\"question\": x})\nprint(sql_query)"} +{"id": "000372", "text": "I'm trying to pass filters to redis retriever to do hybrid search on my embeddings (vector + metadata filtering). The following doesn't work! 
It fails to pass the filters and filters would always be None:\nretriever = redis.as_retriever(\n search_type=\"similarity_distance_threshold\",\n search_kwargs=\"{'include_metadata': True,'distance_threshold': 0.8,'k': 5}\",\n filter=\"(@launch:{false} @menu_text:(%%chicken%%))\"\n )\n\nI found another example and apparently filter expression should be pass as search_kwargs, but I can't figure out what should be the correct syntax. If I do it as follow:\nretriever = redis.as_retriever(\n search_type=\"similarity_distance_threshold\",\n \"retriever_search_kwargs\":\"{'include_metadata': True,'distance_threshold': 0.8,'k': 5, 'filter': '@menu_text:(%%chicken%%) @lunch:{true}'}\",\n}\n\nit generates this search query:\nsimilarity_search_by_vector > redis_query : (@content_vector:[VECTOR_RANGE $distance_threshold $vector] @menu_text:(%%chicken%%) @lunch:{true})=>{$yield_distance_as: distance}\nand fails with the following error:\nredis.exceptions.ResponseError: Invalid attribute yield_distance_as\nAny idea how to fix it?\nSystem Info:\nlangchain 0.0.346\nlangchain-core 0.0.10\npython 3.9.18"} +{"id": "000373", "text": "I already installed InstructorEmbedding, but it keeps giving me the error, in jupyter notebook environment using Python 3.12 (I also tried in 3.11). Kernel restarting didn't help.\nimport torch\nfrom langchain.embeddings import HuggingFaceInstructEmbeddings\n\nDEVICE = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n\n\nembedding = HuggingFaceInstructEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\", model_kwargs={\"device\": DEVICE})\n\nerror:\n---------------------------------------------------------------------------\nModuleNotFoundError Traceback (most recent call last)\nFile /opt/conda/lib/python3.11/site-packages/langchain_community/embeddings/huggingface.py:151, in HuggingFaceInstructEmbeddings.__init__(self, **kwargs)\n 150 try:\n--> 151 from InstructorEmbedding import INSTRUCTOR\n 153 self.client = INSTRUCTOR(\n 154 self.model_name, cache_folder=self.cache_folder, **self.model_kwargs\n 155 )\n\nFile /opt/conda/lib/python3.11/site-packages/InstructorEmbedding/__init__.py:1\n----> 1 from .instructor import *\n\nFile /opt/conda/lib/python3.11/site-packages/InstructorEmbedding/instructor.py:9\n 8 from torch import Tensor, device\n----> 9 from sentence_transformers import SentenceTransformer\n 10 from sentence_transformers.models import Transformer\n\nModuleNotFoundError: No module named 'sentence_transformers'\n\nThe above exception was the direct cause of the following exception:\n\nImportError Traceback (most recent call last)\nCell In[2], line 10\n 4 DEVICE = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n 6 #loader = PyPDFDirectoryLoader(\"aircraft_pdfs\")\n 7 #docs = loader.load()\n 8 #print(len(docs)) # length of all pages together\n---> 10 embedding = HuggingFaceInstructEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\", model_kwargs={\"device\": DEVICE})\n\nFile /opt/conda/lib/python3.11/site-packages/langchain_community/embeddings/huggingface.py:157, in HuggingFaceInstructEmbeddings.__init__(self, **kwargs)\n 153 self.client = INSTRUCTOR(\n 154 self.model_name, cache_folder=self.cache_folder, **self.model_kwargs\n 155 )\n 156 except ImportError as e:\n--> 157 raise ImportError(\"Dependencies for InstructorEmbedding not found.\") from e\n\nImportError: Dependencies for InstructorEmbedding not found.\n\nhere is the output of pip freeze\ntransformers==4.37.2\ntorch==2.2.0\nlangchain==0.1.6\nInstructorEmbedding==1.0.1\n..."} 
+{"id": "000374", "text": "I was previously using SQLDatabaseChain to connect LLM (Language Model) with my database, and it was functioning correctly with GPT-3.5. However, when attempting the same process with GPT-4, I encountered an error stating \"incorrect syntax near 's\"\nTo address this issue, I opted to use SQLDatabaseToolkit and the create_sql_agent function. However, I encountered a problem with this approach as I was unable to pass a prompt. When attempting to include a PromptTemplate in the create_sql_agent argument, it resulted in errors.\nValueError: Prompt missing required variables: {'tool_names', 'agent_scratchpad', 'tools'}\nBelow is my code:\ntoolkit = SQLDatabaseToolkit(db=db, llm=llm)\n\nagent_executor = create_sql_agent(\n llm=llm,\n toolkit=toolkit,\n verbose=True,\n prompt=MSSQL_PROMPT,\n)"} +{"id": "000375", "text": "I am trying to use the NebulaGraphStore class from llama_index via from llama_index.graph_stores.nebula import NebulaGraphStore as suggested by the llama_index documentation, but the following error occurred:\nModuleNotFoundError Traceback (most recent call last)\nCell In[2], line 1\n----> 1 from llama_index.graph_stores.nebula import NebulaGraphStore\n\nModuleNotFoundError: No module named 'llama_index.graph_stores'\n\nI tried updating llama_index (version 0.10.5) with pip install -U llama-index but it doesn't work. How can I resolve this?"} +{"id": "000376", "text": "I just upgrade LangChain and OpenAi using below conda install. Then I got below error, any idea how to solve it? Thanks\nhttps://anaconda.org/conda-forge/langchain\nconda install conda-forge::langchain\nhttps://anaconda.org/conda-forge/openai\nconda install conda-forge::openai\nfrom langchain.agents.agent_toolkits import create_python_agent\n\n\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\nCell In[2], line 1\n----> 1 from langchain.agents.agent_toolkits import create_python_agent\n 2 from langchain.tools.python.tool import PythonREPLTool\n 3 from langchain.llms.openai import OpenAI\n\nFile :1075, in _handle_fromlist(module, fromlist, import_, recursive)\n\nFile c:\\Users\\yongn\\miniconda3\\envs\\langchain_ai\\lib\\site-packages\\langchain\\agents\\agent_toolkits\\__init__.py:50, in __getattr__(name)\n 48 \"\"\"Get attr name.\"\"\"\n 49 if name in DEPRECATED_AGENTS:\n---> 50 relative_path = as_import_path(Path(__file__).parent, suffix=name)\n 51 old_path = \"langchain.\" + relative_path\n 52 new_path = \"langchain_experimental.\" + relative_path\n\nFile c:\\Users\\test\\miniconda3\\envs\\langchain_ai\\lib\\site-packages\\langchain_core\\_api\\path.py:30, in as_import_path(file, suffix, relative_to)\n 28 if isinstance(file, str):\n 29 file = Path(file)\n---> 30 path = get_relative_path(file, relative_to=relative_to)\n 31 if file.is_file():\n 32 path = path[: -len(file.suffix)]\n\nFile c:\\Users\\test\\miniconda3\\envs\\langchain_ai\\lib\\site-packages\\langchain_core\\_api\\path.py:18, in get_relative_path(file, relative_to)\n 16 if isinstance(file, str):\n 17 file = Path(file)\n---> 18 return str(file.relative_to(relative_to))\n\nFile c:\\Users\\test\\miniconda3\\envs\\langchain_ai\\lib\\pathlib.py:818, in PurePath.relative_to(self, *other)\n 816 if (root or drv) if n == 0 else cf(abs_parts[:n]) != cf(to_abs_parts):\n 817 formatted = self._format_parsed_parts(to_drv, to_root, to_parts)\n--> 818 raise ValueError(\"{!r} is not in the subpath of {!r}\"\n 819 \" OR one path is relative and the other is 
absolute.\"\n 820 .format(str(self), str(formatted)))\n 821 return self._from_parsed_parts('', root if n == 1 else '',\n 822 abs_parts[n:])\n\nValueError: 'c:\\\\Users\\\\test\\\\miniconda3\\\\envs\\\\langchain_ai\\\\lib\\\\site-packages\\\\langchain\\\\agents\\\\agent_toolkits' is not in the subpath of 'c:\\\\Users\\\\test\\\\miniconda3\\\\envs\\\\langchain_ai\\\\lib\\\\site-packages\\\\langchain_core' OR one path is relative and the other is absolute."} +{"id": "000377", "text": "I am using StructuredParser of Langchain library. I am getting flat dictionary from parser. Please guide me to get a list of dictionaries from output parser.\nPROMPT_TEMPLATE = \"\"\" \nYou are an android developer. \nParse this error message and provide me identifiers & texts mentioend in error message. \n--------\nError message is {msg}\n--------\n{format_instructions}\n\"\"\"\n\ndef get_output_parser():\n missing_id = ResponseSchema(name=\"identifier\", description=\"This is missing identifier.\")\n missing_text = ResponseSchema(name=\"text\", description=\"This is missing text.\")\n\n response_schemas = [missing_id, missing_text]\n output_parser = StructuredOutputParser.from_response_schemas(response_schemas)\n return output_parser\n\n\ndef predict_result(msg):\n model = ChatOpenAI(open_api_key=\"\", openai_api_base=\"\", model=\"llama-2-70b-chat-hf\", temperature=0, max_tokens=2000)\n output_parser = get_output_parser()\n format_instructions = output_parser.get_format_instructions()\n \n prompt = ChatPromptTemplate.from_template(template=PROMPT_TEMPLATE)\n message = prompt.format_messages(msg=msg, format_instructions=format_instructions)\n response = model.invoke(message)\n\n response_as_dict = output_parser.parse(response.content)\n print(response_as_dict)\n\n\npredict_result(\"ObjectNotFoundException AnyOf(AllOf(withId:identifier1, withText:text1),AllOf(withId:identifier2, withText:text1),AllOf(withId:identifier3, withText:text1))\")\n\nThe output I get is\n{\n \"identifier\":\"identifier1\",\n \"text\":\"text1\"\n}\n\n\nExpected output is\n[\n {\n \"identifier\":\"identifier1\",\n \"text\":\"text1\"\n },\n {\n \"identifier\":\"identifier2\",\n \"text\":\"text1\"\n },\n {\n \"identifier\":\"identifier3\",\n \"text\":\"text1\"\n }\n]\n\nHow to specify such nested JSON in OutputParser"} +{"id": "000378", "text": "Hello i am trying to run this following code but i am getting an error;\nfrom langchain.schema import BaseOuputParser\n\nError;\n\nImportError: cannot import name 'BaseOuputParser' from\n'langchain.schema'\n\nMy langchain version is ; '0.1.7'"} +{"id": "000379", "text": "I'm trying to understand what the correct strategy is for storing and using embeddings in a vector database, to be used with an LLM. If my goal is to reduce the amount of work the LLM has to do when generating a response, (So you can think of a RAG implementation where I've stored text, embeddings I've created using an LLM, and metadata about the text.) I'm then trying to generate responses using say openai model from queries about the data, and I don't want to have to spend a bunch of money and time chunking up the text and creating embeddings for it every time I want to answer a query about it.\nIf I create a vector database, for example a chroma database and I use an LLM to create embeddings for a corpus I have. I save those embeddings into the vector database, along with the text and metadata. 
Would the database use those embeddings I created to find the relevant text chunks, or would it make more sense for the vector database to use its own query process to find the relevant chunks (not using the embeddings the LLM created)?\nAlso, do I want to pass the embeddings from the vector database to the LLM to generate the response, or do I pass the text that the vector database found most relevant to the LLM, along with the original text query, so the LLM can then generate a response?"} +{"id": "000380", "text": "I'm creating a QA bot with RAG and aiming to provide the specific documents from which the answers are extracted.\nRetrieval QA uses k documents which are semantically similar to the query to generate the answer. The answer need not be in all the k documents, so how can we know which of the k documents the answer is extracted from?\nHow can we know which of those source documents the LLM extracted the answer from?"} +{"id": "000381", "text": "Been stuck on trying to add variables to the prompt template, using this example from Langchain Docs as a starting point.\nValidationError: 1 validation error for ConversationChain __root__ Got unexpected prompt input variables. The prompt expects ['history', 'input', 'daily_context'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error\n\nAny suggestions here to get past the validation?\ndaily_context = \"It is a Saturday, the office is open.\"\n\ntemplate = \"\"\"You are Jane's Boss. \n\nHere is some context about today: {daily_context}\n\nCurrent conversation:\n{history}\nJane: {input}\nAI:\"\"\"\nPROMPT = PromptTemplate(input_variables=[ \"history\", \"input\", \"daily_context\"], template=template)\n\nllm = OpenAI(temperature=0.4, model=\"gpt-3.5-turbo-instruct\")\nconversation = ConversationChain(\n prompt=PROMPT,\n llm=llm,\n verbose=True,\n memory=ConversationBufferMemory(),\n)"} +{"id": "000382", "text": "From the langchain documentation - Per-User Retrieval\n\nWhen building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user, but for many different users, and they should not be able to see eachother\u2019s data. This means that you need to be able to configure your retrieval chain to only retrieve certain information.\n\nThe documentation has an example implementation using PineconeVectorStore. Does chromadb support multiple users? If yes, can anyone help with an example of how the per-user retrieval can be implemented using the open source ChromaDB?"} +{"id": "000383", "text": "Not a coding question, but a documentation omission that is nowhere mentioned online at this point. When using the Langchain CSVLoader, which column is being vectorized via the OpenAI embeddings I am using?\nI ask because, viewing the code below, I vectorized a sample CSV, did searches (on Pinecone) and consistently received back DISsimilar responses. How do I know which column Langchain is actually identifying to vectorize?\nloader = CSVLoader(file_path=file, metadata_columns=['col2', 'col3', 'col4','col5'])\nlangchain_docs = loader.load()\ntext_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=100)\ndocs = text_splitter.split_documents(langchain_docs)\nfor doc in docs:\n doc.metadata.pop('source')\n doc.metadata.pop('row')\nmy_index = pc_store.from_documents(docs, embeddings, index_name=PINECONE_INDEX_NAME)\n\nI am assuming the CSVLoader is then identifying col1 to vectorize. 
But, searches of Pinecone are terrible, leading me to think some other column is being vectorized."} +{"id": "000384", "text": "I built a QnA App with Flowise.\nUntil now I used the ChatOpenAI node together with the OpenAI Embeddings.\nToday, I wanted to try the Anthropic Claude LLM, but couldn't find specific Anthropic Embeddings. So, curiously, I used the OpenAI Embeddings just to see what would happen.\nI expected the response to not work, or to be complete gibberish, because I thought Embeddings were model specific?\nBut fascinatingly I got a perfect response.\nCan someone please explain how this is possible? I thought embeddings had to be learned model specifically? My complete understanding of embeddings is shattered.\nThis is my Flowise chatflow:\n\nEdit:\nIs it possible that the documents are embedded by openai, and my prompts are also embedded with openai, to retrieve the texts with highest similarity? Then the texts and my prompt are both passed to claude?"} +{"id": "000385", "text": "I have successfully connected to a Redshift database like below and got all the table names;\nconn = psycopg2.connect(host,db,port,username,password)\ncursor.execute(\"SELECT tablename FROM pg_tables GROUP BY tablename ORDER BY tablename\")\n\nHowever, when I connect using langchain and sqlalchemy like below, get_usable_table_names returns only a few of the many tables in the database;\npg_url = f\"postgresql+psycopg2://{db_user}:{db_password}@{db_host}:{port_}/{db_}\"\ndb_engine = create_engine(pg_url)\ndb = SQLDatabase(db_engine)\nllm = OpenAI(temperature=0.0, openai_api_key=OPENAI_API_KEY, model='gpt-3.5-turbo')\n\ntable_names = \"\\n\".join(db.get_usable_table_names())\n\nDoes anyone have any suggestions on what might be the issue?\nI have tried querying a missing table by;\ndb.run(\"SELECT * FROM db_schema.missing_table_name\") \n\nand this works. However, I need SQLDatabase from the langchain.sql_database module to detect the tables right without specifying them one by one. (Because I would like to Chat With Sql Database Using Langchain & OpenAI)"} +{"id": "000386", "text": "I am using the below code and for the same question, it returns different results; is there any way to fix that?\nfrom langchain.chains import create_sql_query_chain\nfrom langchain_openai import ChatOpenAI\nfrom langchain_community.utilities import SQLDatabase\nimport os\n\ndef return_query(question):\n db = SQLDatabase.from_uri(os.getenv(\"POSTGRES_URL\"))\n llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n chain = create_sql_query_chain(llm, db)\n response = chain.invoke({\"question\": question})\n return response\n\nFor example, my question is \"create table student\" and I get the below responses on re-trying the same code:\n\nResponse1: This table does not exist in the provided database schema.\nResponse2: SELECT * FROM information_schema.tables WHERE table_name = 'student' LIMIT 1;\nResponse3: This question cannot be answered directly using the existing tables provided in the database schema. To create a new table named \"student\", you can use the following SQL query:\n\nCREATE TABLE student (\n id SERIAL PRIMARY KEY,\n name TEXT NOT NULL,\n email TEXT NOT NULL,\n age INTEGER,\n major TEXT\n);"} +{"id": "000387", "text": "I'm trying to use the Langchain ReAct Agents and I want to give them my pinecone index for context. 
I couldn't find any interface that let me provide the LLM that uses the ReAct chain my vector embeddings as well.\nHere I set up the LLM and retrieve my vector embedding.\nllm = ChatOpenAI(temperature=0.1, model_name=\"gpt-4\")\nretriever = vector_store.as_retriever(search_type='similarity', search_kwargs={'k': k})\n\nHere I start my ReAct Chain.\nprompt = hub.pull(\"hwchase17/structured-chat-agent\")\nagent = create_structured_chat_agent(llm, tools, prompt)\nagent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\nresult = agent_executor.invoke(\n {\n \"input\": question,\n \"chat_history\": chat_history\n }\n)\n\nBefore using the ReAct Agent, I used the vector embedding like this.\ncrc = ConversationalRetrievalChain.from_llm(llm, retriever)\nresult = crc.invoke({'question': systemPrompt, 'chat_history': chat_history})\nchat_history.append((question, result['answer']))\n\nIs there any way to combine both methods and have a ReAct agent that also uses vector Embeddings?"} +{"id": "000388", "text": "I am trying to create a AI chatbot backend with langchain and fastAPI. I've managed to generate an output based on a user query. The bot embeds a context hardcoded.\nHowever the response I got includes everything (the query, the full template as well as the actual bot's answer). Is there any way to get the bot's answer as answer ?\nThank you\ntemplate = \"\"\"\nYou are Bot, a AI bot to help user of my portfolio if they need to. Always be thankfull with the user from showing interest to my portfolio.\nDepending on user's question you need to point them to the right section of the portfolio. \nIf they want to contact me they can use the contact form or visit one of my social media via the links\n\nMy name is John Doe and this is my personal portfolio where I display my interests:\n - chess with my chess.com stats\n - running with my strava stats result\n - some pictures showing my accomplishments and my passion for travels and mountains\n - a contact form to reach me out\n - links to my different social media (facebook, linkedin, github, chess.com and strava)\n\nUser query : {question}\nBot's answer :\"\"\"\n\napp = FastAPI()\n\n@app.post(\"/conversation\")\nasync def read_conversation(query:str):\n\n repo_id = \"mistralai/Mistral-7B-Instruct-v0.2\"\n\n llm1 = HuggingFaceHub(\n repo_id=repo_id, \n model_kwargs={\"temperature\" : 0.7}\n )\n\n prompt = PromptTemplate(\n input_variables=[\"question\"], template=template\n )\n chain = LLMChain(llm=llm1, prompt=prompt)\n response = chain.invoke({\"question\":query})\n\n return {\"response\" : response}"} +{"id": "000389", "text": "I am currently trying to use the Helsinki-NLP/opus-mt-en-de and de-en models. 
I was trying to setup a pipeline and use both as LLMChain but I keep getting the same error:\nValueError: The following `model_kwargs` are not used by the model: ['pipeline_kwargs', 'return_full_text'] (note: typos in the generate arguments will also show up in this list)\n\nI used the following snippet to initialise both models and ran the snippet after to test the output:\ndef get_translation_chains():\n _de_en_translation_prompt = PromptTemplate.from_template(\n \"\"\"Translate the following text from German to English:\n {text}\n \"\"\"\n )\n\n _en_de_translation_prompt = PromptTemplate.from_template(\n \"\"\"Translate the following text from English to German:\n {text}\n \"\"\"\n )\n\n _en_to_de_tokenizer = AutoTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-en-de\")\n _en_to_de_model = AutoModelForSeq2SeqLM.from_pretrained(\"Helsinki-NLP/opus-mt-en-de\")\n _de_to_en_tokenizer = AutoTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-de-en\")\n _de_to_en_model = AutoModelForSeq2SeqLM.from_pretrained(\"Helsinki-NLP/opus-mt-de-en\")\n\n _en_to_de_pipeline = pipeline(\n model=_en_to_de_model,\n tokenizer=_en_to_de_tokenizer,\n task=\"translation\",\n )\n\n _de_to_en_pipeline = pipeline(\n model=_de_to_en_model,\n tokenizer=_de_to_en_tokenizer,\n task=\"translation\",\n )\n\n _de_to_en_llm = HuggingFacePipeline(pipeline=_de_to_en_pipeline)\n _en_to_de_llm = HuggingFacePipeline(pipeline=_en_to_de_pipeline)\n\n _de_to_en_chain = LLMChain(\n prompt=_de_en_translation_prompt,\n llm=_de_to_en_llm,\n )\n\n _en_to_de_chain = LLMChain(\n prompt=_en_de_translation_prompt,\n llm=_en_to_de_llm,\n )\n\n return _en_to_de_chain, _de_to_en_chain\n\n\n\nen_to_de_chain, de_to_en_pipeline = get_translation_chains()\n\nprint(en_to_de_chain.invoke({\"text\": \"Hello, how are you?\"}))\n\nI am fairly new to using LLMs and both the huggingface and langchain libraries and could not find anything to give me a clue on this one.\nI tried to use the pipeline with only setting the task I wanted \"translation_de_to_en\" and the other way around as well as using \"translation\" only for both default and more detailed pipeline. I also tried to set the kwargs option to None and False but with no success"} +{"id": "000390", "text": "I am building a very simple rag application using Langchain. The problem I'm having is that when I use ChatOpenAI and ask a question. The model doesn't make any sentences when it answers, it doesn't behave like a \"chatbot\" unlike llama2 for example (see images below). 
When I switch from ChatOpenAI to llama2, I don't touch anything in my code except to comment on the model.\nMy data is based on openfoodfacts, which is why I ask for specific ingredients in the question.\nWhat's the problem and what can I do to get the same result as llama2 using ChatOpenAI ?\nChatOpenAI :\n\nLlama2:\n\nCode :\nfrom fastapi import FastAPI\nfrom langchain.vectorstores import FAISS\nfrom langchain_community.embeddings import HuggingFaceEmbeddings\nfrom langserve import add_routes\nfrom langchain_community.llms import Ollama\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.output_parsers import StrOutputParser\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.runnables import RunnableLambda, RunnablePassthrough\nfrom langchain.embeddings import OpenAIEmbeddings\nimport os\n\nos.environ[\"OPENAI_API_KEY\"] = \"SECRET\"\n\n# model = Ollama(model=\"llama2\")\nmodel = ChatOpenAI(temperature=0.1)\n\nimport pandas as pd\nproducts = pd.read_csv('./data/products.csv')\nvectorstore = FAISS.from_texts(\n products['text'], embedding=OpenAIEmbeddings()\n)\nretriever = vectorstore.as_retriever()\n\n\napp = FastAPI(\n title=\"LangChain Server\",\n version=\"1.0\",\n description=\"Spin up a simple api server using Langchain's Runnable interfaces\",\n)\n\nANSWER_TEMPLATE = \"\"\"Answer the question based on the following context:\n{context}\n\nQuestion: {question}\n\"\"\"\n\nprompt = ChatPromptTemplate.from_template(ANSWER_TEMPLATE)\n\nchain = (\n {\"context\": retriever, \"question\": RunnablePassthrough()}\n | prompt\n | model\n | StrOutputParser()\n)\n\n# Adds routes to the app for using the retriever under:\n# /invoke\n# /batch\n# /stream\nadd_routes(app, chain)\n\nif __name__ == \"__main__\":\n import uvicorn\n\n uvicorn.run(app, host=\"localhost\", port=8000)"} +{"id": "000391", "text": "I am trying to make some queries to my CSV files using Langchain and OpenAI API. I am able to run this code, but i am not sure why the results are limited to only 4 records out of 500 rows in CSV.\nI tried to print after loading from csv_loader, It shows all the records, so i am doing something wrong in embeddings/vectors. Can anyone please suggest what can i try?\n csv_loader = CSVLoader(csv_file_path)\n data = csv_loader.load()\n\n\n splitter = CharacterTextSplitter(separator = \"\\n\",\n chunk_size=500, \n chunk_overlap=0,\n length_function=len)\n documents = splitter.split_documents(data)\n\n\n embeddings = OpenAIEmbeddings()\n vectorstore = FAISS.from_documents(documents, embeddings)\n vectorstore.save_local(\"faiss_index_constitution\")\n persisted_vectorstore = FAISS.load_local(\"faiss_index_constitution\", embeddings, allow_dangerous_deserialization=True)\n query = \"What's the sum of amount of the transactions since 1 March 2024?\"\n\n retriever = persisted_vectorstore.as_retriever()\n\n chain = RetrievalQA.from_llm(llm=model, retriever=retriever, verbose=True)\n\n\n chain_input = {\"query\": query, \"context\": None}\n result = chain(chain_input)\n\n return result"} +{"id": "000392", "text": "I want to get the holidays and it's date from LLM agent as a list of dictionary. 
Following is the code I have used\nfrom langchain.agents import AgentType,initialize_agent,load_tools\nfrom langchain.prompts import ChatPromptTemplate\nfrom langchain.output_parsers import ResponseSchema,StructuredOutputParser\nfrom langchain_community.chat_models import ChatOpenAI\nimport os\n\ntools=load_tools([\"serpapi\"])\nllm=ChatOpenAI(model=\"gpt-4\",temperature=0.0)\n\nholiday=ResponseSchema(name=\"holiday\",description=\"this is the name of the holiday\")\ndate=ResponseSchema(name=\"holiday\",description=\"this is the date of the holiday in datetime pattern, ex: 2024-01-01\")\n\nresponse_schema=[holiday, date]\noutput_parser=StructuredOutputParser.from_response_schemas(response_schema)\nformat_instruction=output_parser.get_format_instructions()\nts=\"\"\"\nYou are an intelligent search master who can search internet using serpapi tool and retrieve the holidays in given country or region and you should find holiday and date in datetime pattern\nTake the input below delimited by tripe backticks and use it to search and analyse using serapi tool\ninput:```{input}```\n{format_instruction}\n\"\"\"\n\nprompt=ChatPromptTemplate.from_template(ts)\nfs=prompt.format_messages(input=\"holidays in Berlin in 2024\",format_instruction=format_instruction)\nagent=initialize_agent(tools,llm,agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,verbose=True, handle_parsing_errors=True)\nresponse=agent.run(fs)\noutput=output_parser.parse(response)\n\n\n\n Entering new AgentExecutor chain...\nThe user wants to know the holidays in Berlin in 2024. I need to search for this information and present it in a specific JSON format. \nAction: Search\nAction Input: holidays in Berlin in 2024\nObservation: [\"Berlin, Germany\u2019s capital, dates to the 13th century. Reminders of the city's turbulent 20th-century history include its Holocaust memorial and the Berlin Wall's graffitied remains. Divided during the Cold War, its 18th-century Brandenburg Gate has become a symbol of reunification. The city's also known for its art scene and modern landmarks like the gold-colored, swoop-roofed Berliner Philharmonie, built in 1963. \u2015 Google\", 'Berlin type: Capital of Germany.', 'Berlin entity_type: cities, locations, travel, travel.', 'Berlin kgmid: /m/0156q.', 'Berlin place_id: ChIJAVkDPzdOqEcRcDteW0YgIQQ.', 'Berlin age: About 787 years.', 'Berlin founded: 1237.', 'Berlin population: 3.645 million (2019) Eurostat.', 'Berlin area_code: 030.', 'Berlin metro_population: 6,144,600.', 'Berlin mayor: Kai Wegner.', 'Berlin School Holidays & Public Holidays 2024 ; Labour Day, May 01, 2024 (Wednesday) ; Ascension Day, May 09, 2024 (Thursday) ; Whit Monday, May 20, 2024 (Monday).', 'Berlin Public Holidays 2024 ; 9 May, Thu, Ascension Day ; 20 May, Mon, Whit Monday ; 3 Oct, Thu, Day of German Unity ; 25 Dec, Wed, Christmas Day.', 'May 1, 2024 (Wednesday), Labour Day (May Day) ; May 9, 2024 (Thursday), Ascension Day ; May 20, 2024 (Monday), Whit Monday ; October 3, 2024 ( ...', \"Public holidays in Berlin 2024. Public holiday, Date. New Year's Day, January 1, 2024 (Monday). International Women's Day, March 8, 2024 (Friday).\", \"On this page you can find the calendar of all 2024 public holidays for Berlin, Germany. New Year's DayMonday January 01, 2024. Mon January 01, 2024.\", 'List of Holidays in Berlin in 2024 ; Friday, Mar 29, Good Friday ; Monday, Apr 01, Easter Monday ; Wednesday, May 01, Labour Day ; Thursday, May 09, Ascension Day ...', \"Monday 1 January 2024, New Year's Day. 
Friday, 08 March 2024, International Women's Day / Internationaler Frauentag***. Friday, 29 March 2024, Good Friday.\", 'In Berlin, the public holidays for 2024 are as follows: 1. May 1, Wednesday - Labour Day [[1](https://publicholidays.de/berlin/2024-dates/)] ...', 'The list of Berlin (Germany) public holidays in 2024 is: ; May 20, Monday, Whit Monday ; Oct 3, Thursday, German Unity Day.', \"9th of November, Observance, Berlin. Nov 9, Saturday, Fall of the Berlin Wall, Observance. Nov 11, Monday, St. Martin's Day, Observance, Christian. Nov 17 ...\"]\nThought:I have found the information about the holidays in Berlin in 2024. Now I need to format this information into the requested JSON format. \nFinal Answer: \n```json\n{\n \"holiday\": \"New Year's Day\",\n \"date\": \"2024-01-01\",\n \"holiday\": \"International Women's Day\",\n \"date\": \"2024-03-08\",\n \"holiday\": \"Good Friday\",\n \"date\": \"2024-03-29\",\n \"holiday\": \"Labour Day\",\n \"date\": \"2024-05-01\",\n \"holiday\": \"Ascension Day\",\n \"date\": \"2024-05-09\",\n \"holiday\": \"Whit Monday\",\n \"date\": \"2024-05-20\",\n \"holiday\": \"Day of German Unity\",\n \"date\": \"2024-10-03\",\n \"holiday\": \"Christmas Day\",\n \"date\": \"2024-12-25\"\n}\n```\n\nbut the output I get is only one {'holiday': 'Christmas Day', 'date': '2024-12-25'}\nI want to get all the holidays and and dates."} +{"id": "000393", "text": "Am trying to create vector stores on top of my existing KG using from_existing_graph, (followed tomaz and Saurav Joshi neo4j blog posts) - this method is allowing me to create embedding/vector index only for single label due to which am unable to get desired results while asking NLQ (I am assuming though).\nbelow code is able to answer, the age and location of Oliver but not what he directed,\ni believe this is due to from_existing_graph has only to pass single label and its corresponding properties as option for generating embeddings and vector index\nAny ideas, how to achieve this?\nimport os\nimport re\nfrom langchain.vectorstores.neo4j_vector import Neo4jVector\n# from langchain.document_loaders import WikipediaLoader\nfrom langchain_openai import OpenAIEmbeddings\n# from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter\nfrom langchain.graphs import Neo4jGraph\nimport openai\n# from transformers import AutoModelForSeq2SeqLM, AutoTokenizer\n\nos.environ[\"OPENAI_API_KEY\"] = \"sk-xx\"\nurl = \"neo4j+s://xxxx.databases.neo4j.io\"\nusername = \"neo4j\"\npassword = \"mypassword\"\nexisting_graph = Neo4jVector.from_existing_graph(\n embedding=OpenAIEmbeddings(),\n url=url,\n username=username,\n password=password,\n index_name=\"person\",\n node_label=\"Person\",\n text_node_properties=[\"name\", \"age\", \"location\"],\n embedding_node_property=\"embedding\",\n)\n\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.chains import GraphCypherQAChain\nfrom langchain.graphs import Neo4jGraph\n\ngraph = Neo4jGraph(\n url=url, username=username, password=password\n)\n\nchain = GraphCypherQAChain.from_llm(\n ChatOpenAI(temperature=0), graph=graph, verbose=True\n)\n\nquery = \"Where does Oliver Stone live?\"\n#query = \"Name some films directed by Oliver Stone?\" \n\ngraph_result = chain.invoke(query)\n\nvector_results = existing_graph.similarity_search(query, k=1)\nfor i, res in enumerate(vector_results):\n print(res.page_content)\n if i != len(vector_results)-1:\n print()\nvector_result = vector_results[0].page_content\n\n# Construct prompt for OpenAI\nfinal_prompt 
= f\"\"\"You are a helpful question-answering agent. Your task is to analyze\nand synthesize information from two sources: the top result from a similarity search\n(unstructured information) and relevant data from a graph database (structured information).\nGiven the user's query: {query}, provide a meaningful and efficient answer based\non the insights derived from the following data:\n\nUnstructured information: {vector_result}.\nStructured information: {graph_result} \"\"\"\n\n\nfrom openai import OpenAI\nclient = OpenAI(\n # This is the default and can be omitted\n api_key=os.environ.get(\"OPENAI_API_KEY\"),\n)\n\nchat_completion = client.chat.completions.create(messages=[{\"role\": \"user\",\"content\": final_prompt, }],model=\"gpt-3.5-turbo\",)\n\nanswer = chat_completion.choices[0].message.content.strip()\nprint(answer)\n\nAny help would be highly appreicated?\nhere is my schema:\nNode properties are the following:\nPerson {name: STRING, embedding: LIST, age: INTEGER, location: STRING},Actor {name: STRING, embedding: LIST},Movie {title: STRING},Director {name: STRING, embedding: LIST, age: INTEGER, location: STRING}\nRelationship properties are the following:\nACTED_IN {role: STRING}\nThe relationships are the following:\n(:Person)-[:ACTED_IN]->(:Movie),(:Person)-[:DIRECTED]->(:Movie),(:Actor)-[:ACTED_IN]->(:Movie),(:Director)-[:DIRECTED]->(:Movie)\n\nCypher used to create:\nCREATE (charlie:Person:Actor {name: 'Charlie Sheen'})-[:ACTED_IN {role: 'Bud Fox'}]->(wallStreet:Movie {title: 'Wall Street'})<-[:DIRECTED]-(oliver:Person:Director {name: 'Oliver Stone'});\nMATCH (n:Person {name: 'Oliver Stone'}) SET n.age = 30, n.location = \"New York\" RETURN n"} +{"id": "000394", "text": "I am working on a chat application in Langchain, Python. The idea is that user submits some pdf files that the chat model is trained on and then asks questions from the model regarding those documents. The embeddings are stored in Chromadb vector database. So effectively a RAG-based solution.\nNow, both the creation and storage of embeddings are working fine and also chat is working good. However, I am storing my custom metadata to the embeddings and some ids. 
The code for that is given as under:\ndef read_docs(pdf_file):\n pdf_loader = PyPDFLoader(pdf_file)\n pdf_documents = pdf_loader.load()\n\n text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n documents = text_splitter.split_documents(pdf_documents)\n \n return documents\n\ndef generate_and_store_embeddings(documents, pdf_file, user_id):\n client = chromadb.PersistentClient(path=\"./trained_db\")\n collection = client.get_or_create_collection(\"PDF_Embeddings\", embedding_function=embedding_functions.OpenAIEmbeddingFunction(api_key=config[\"OPENAI_API_KEY\"], model_name=configs.EMBEDDINGS_MODEL))\n now = datetime.now()\n\n #custom metadata and ids I want to store along with the embeddings for each pdf\n metadata = {\"source\": pdf_file.filename, \"user\": str(user_id), 'created_at': \n now.strftime(\"%d/%m/%Y %H:%M:%S\")}\n ids = [str(uuid.uuid4()) for _ in range(len(documents))]\n\n try:\n vectordb = Chroma.from_documents(\n documents, \n embedding=OpenAIEmbeddings(openai_api_key=config[\"OPENAI_API_KEY\"], \n model=configs.EMBEDDINGS_MODEL),\n persist_directory='./trained_db',\n collection_name = collection.name, \n client = client,\n ids = ids,\n collection_metadata = {item: value for (item, value) in metadata.items()}\n )\n vectordb.persist()\n \n except Exception as err:\n print(f\"An error occured: {err=}, {type(err)=}\")\n return {\"answer\": \"An error occured while generating embeddings. Please check terminal \n for more details.\"}\n return vectordb\n\nNow, what I want is to retrieve those ids and metadata associated with the pdf file rather than all the ids/metadata in the collection. This is so that when a user enters the pdf file to delete the embeddings of, I can retrieve the metadata and the ids of that pdf file only so that I can use those IDs to delete the embeddings of the pdf file from the collection.\nI know the vectordb._collection.get() function but it will return all the IDs.\nI also used this code: print(vectordb.get(where={\"source\": pdf_file.filename})) but it returns\n\n{'ids': [], 'embeddings': None, 'metadatas': [], 'documents': [], 'uris': None, 'data': None}"} +{"id": "000395", "text": "I am implementing RAG on a Gemma-2B-it model using langchain's HuggingFaceEmbeddings and ConversationalRetrievalChain.\nWhen running:\nchat_history = []\nquestion = \"My prompt\"\nresult = qa.invoke({\"question\": question, \"chat_history\": chat_history})\n\n\nI get\n 276 \n 277 if self.pipeline.task == \"text-generation\":\n--> 278 text = response[\"generated_text\"]\n 279 elif self.pipeline.task == \"text2text-generation\":\n 280 text = response[\"generated_text\"]\n\nKeyError: 'generated_text'\n\nI don't understand why this is happening. It used to work and, today, it just stopped working. 
I have also tried using qa.run instead of invoke but it still raises the same exception.\nI have tried changing models, devices but nothing fixes it."} +{"id": "000396", "text": "One can obtain a ChatGPT response to a prompt using the following example:\nfrom openai import OpenAI\n\nclient = OpenAI() # requires key in OPEN_AI_KEY environment variable\n\ncompletion = client.chat.completions.create(\n model=\"gpt-3.5-turbo\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a poetic assistant, skilled in explaining complex programming concepts with creative flair.\"},\n {\"role\": \"user\", \"content\": \"Compose a poem that explains the concept of recursion in programming.\"}\n ]\n)\n\nprint(completion.choices[0].message.content)\n\nHow can one continue the conversation? I've seen examples saying you just add a new message to the list of messages and re-submit:\n# Continue the conversation by including the initial messages and adding a new one\ncontinued_completion = client.chat.completions.create(\n model=\"gpt-3.5-turbo\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a poetic assistant, skilled in explaining complex programming concepts with creative flair.\"},\n {\"role\": \"user\", \"content\": \"Compose a poem that explains the concept of recursion in programming.\"},\n {\"role\": \"assistant\", \"content\": initial_completion.choices[0].message.content}, # Include the initial response\n {\"role\": \"user\", \"content\": \"Can you elaborate more on how recursion can lead to infinite loops if not properly handled?\"} # New follow-up prompt\n ]\n)\n\nBut I would imagine this means processing the previous messages all over again at every new prompt, which seems quite wasteful. Is that really the only way? Isn't there a way to keep a \"session\" of some sort that keeps ChatGPT's internal state and just processes a newly given prompt?"} +{"id": "000397", "text": "I have been reading the documentation all day and can't seem to wrap my head around how I can create a VectorStoreIndex with llama_index and use the created embeddings as supplemental information for a RAG application/chatbot that can communicate with a user. I want to use llama_index because they have some cool ways to perform more advanced retrieval techniques like sentence window retrieval and auto-merging retrieval (to be fair I have not investigated if Langchain also supports these types of vector retrieval methods). I want to use LangChain because of its functionality for developing more complex prompt templates (similarly I have not really investigated if llama_index supports this).\nMy goal is to ultimately evaluate how these different retrieval methods perform within the context of the application/chatbot. I know how to evaluate them with a separate evaluation questions file, but I would like to do things like compare the speed and humanness of responses, token usage, etc.\nThe code for a minimal reproducible example would be as follows\n1) LangChain ChatBot initiation \n from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n from langchain.memory import ChatMessageHistory\n \n \n prompt = ChatPromptTemplate.from_messages(\n [\n (\n \"system\",\n \"\"\"You are the world's greatest... \\\n Use this document base to help you provide the best support possible to everyone you engage with. 
\n \"\"\",\n ),\n MessagesPlaceholder(variable_name=\"messages\"),\n ]\n )\n \n chat = ChatOpenAI(model=llm_model, temperature=0.7)\n \n \n \n chain = prompt | chat\n \n \n chat_history = ChatMessageHistory()\n \n while True:\n user_input = input(\"You: \")\n chat_history.add_user_message(user_input)\n \n response = chain.invoke({\"messages\": chat_history.messages})\n \n if user_input.lower() == 'exit':\n break\n \n print(\"AI:\", response)\n chat_history.add_ai_message(response)\n\n\nLlama index sentence window retrieval\n\nfrom llama_index.core.node_parser import SentenceWindowNodeParser\n from llama_index.core.indices.postprocessor import MetadataReplacementPostProcessor\n from llama_index.core.postprocessor import LLMRerank\n \n class SentenceWindowUtils:\n def __init__(self, documents, llm, embed_model, sentence_window_size):\n self.documents = documents\n self.llm = llm\n self.embed_model = embed_model\n self.sentence_window_size = sentence_window_size\n # self.save_dir = save_dir\n \n self.node_parser = SentenceWindowNodeParser.from_defaults(\n window_size=self.sentence_window_size,\n window_metadata_key=\"window\",\n original_text_metadata_key=\"original_text\",\n )\n \n self.sentence_context = ServiceContext.from_defaults(\n llm=self.llm,\n embed_model=self.embed_model,\n node_parser=self.node_parser,\n )\n \n def build_sentence_window_index(self, save_dir):\n if not os.path.exists(save_dir):\n os.makedirs(save_dir)\n sentence_index = VectorStoreIndex.from_documents(\n self.documents, service_context=self.sentence_context\n )\n sentence_index.storage_context.persist(persist_dir=save_dir)\n else:\n sentence_index = load_index_from_storage(\n StorageContext.from_defaults(persist_dir=save_dir),\n service_context=self.sentence_context,\n )\n \n return sentence_index\n \n def get_sentence_window_query_engine(self, sentence_index, similarity_top_k=6, rerank_top_n=3):\n postproc = MetadataReplacementPostProcessor(target_metadata_key=\"window\")\n rerank = LLMRerank(top_n=rerank_top_n, service_context=self.sentence_context)\n \n sentence_window_engine = sentence_index.as_query_engine(\n similarity_top_k=similarity_top_k, node_postprocessors=[postproc, rerank]\n )\n \n return sentence_window_engine\n \n \n sentence_window = SentenceWindowUtils(documents=documents, llm = llm, embed_model=embed_model, sentence_window_size=1)\n sentence_window_1 = sentence_window.build_sentence_window_index(save_dir='./indexes/sentence_window_index_1')\n sentence_window_engine_1 = sentence_window.get_sentence_window_query_engine(sentence_window_1)\n\nBoth blocks of code independently will run. But the goal is that when a query is performed that warrants a retrieval to the existing document base, I can use the sentence_window_engine that was built. I suppose I could retrieve relevant information based on the query and then pass that information into a subsequent prompt for the chatbot, but I would like to try and avoid including the document data in a prompt.\nAny suggestions?"} +{"id": "000398", "text": "The following function was working till a few days ago but now gives this error:\nValueError: Expected EmbeddingFunction._call_ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['args', 'kwargs']) Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface. 
Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023\nI am not sure what changes are necessary to work with this.\n` def create_chromadb(link): \n embedding_function = SentenceTransformerEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n loader = TextLoader(link)\n documents = loader.load()\n \n # Split the documents into chunks (no changes needed here)\n text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=500)\n chunks = text_splitter.split_documents(documents)\n \n # Update for new EmbeddingFunction definition\n # D is set to the type of documents (Text in this case)\n D = Union[str, List[str]] # Adjust based on your document format (single string or list of strings)\n embedding_function: EmbeddingFunction[D] = embedding_function\n \n # Initialize Chroma with the embedding function and persist the database\n db = Chroma.from_documents(chunks, embedding_function, ids=None, collection_name=\"langchain\", persist_directory=\"./chroma_db\")\n db.persist()\n print(f\"Saved {len(chunks)} chunks\")\n \n return db`\n\ndocs.trychroma.com/migration#migration-to-0416---november-7-2023"} +{"id": "000399", "text": "I'm trying to build a RAG using the Chroma database, but when I try to create it I have the following error : AttributeError: 'SentenceTransformer' object has no attribute 'embed_documents'. I saw that you can somehow fix it by modifying the Chroma library directly, but I don't have the rights for it on my environment. If someone has a piece of an advice, be pleased.\nThe ultimate goal is to use the index as a query engine for a chatbot. This is what I tried\nCode:\n#We load the chunks of texts and declare which column is to be embedded\nchunks = DataFrameLoader(final_df_for_chroma_injection,\n page_content_column='TEXT').load()\n\n#create the open-source embedding function\nembedding_model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')\n#-Load the persist directory on which are stored the previous embeddings\n#-And add the new ones from chunks/embeddings\nindex = Chroma.from_documents(chunks,\n embedding_model,\n persist_directory=\"./chroma_db\")\n\nThis is the error I get:\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\nCell In[47], line 3\n 1 #-Load the persist directory on which are stored the previous embeddings\n 2 #-And add the new ones from chunks/embeddings\n----> 3 index = Chroma.from_documents(chunks,\n 4 embedding_model,\n 5 persist_directory=\"./chroma_db\")\n\nFile /opt/anaconda3_envs/abeille_pytorch_p310/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:778, in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)\n 776 texts = [doc.page_content for doc in documents]\n 777 metadatas = [doc.metadata for doc in documents]\n--> 778 return cls.from_texts(\n 779 texts=texts,\n 780 embedding=embedding,\n 781 metadatas=metadatas,\n 782 ids=ids,\n 783 collection_name=collection_name,\n 784 persist_directory=persist_directory,\n 785 client_settings=client_settings,\n 786 client=client,\n 787 collection_metadata=collection_metadata,\n 788 **kwargs,\n 789 )\n\nFile /opt/anaconda3_envs/abeille_pytorch_p310/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:736, in Chroma.from_texts(cls, texts, embedding, metadatas, ids, collection_name, 
persist_directory, client_settings, client, collection_metadata, **kwargs)\n 728 from chromadb.utils.batch_utils import create_batches\n 730 for batch in create_batches(\n 731 api=chroma_collection._client,\n 732 ids=ids,\n 733 metadatas=metadatas,\n 734 documents=texts,\n 735 ):\n--> 736 chroma_collection.add_texts(\n 737 texts=batch[3] if batch[3] else [],\n 738 metadatas=batch[2] if batch[2] else None,\n 739 ids=batch[0],\n 740 )\n 741 else:\n 742 chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n\nFile /opt/anaconda3_envs/abeille_pytorch_p310/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:275, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs)\n 273 texts = list(texts)\n 274 if self._embedding_function is not None:\n--> 275 embeddings = self._embedding_function.embed_documents(texts)\n 276 if metadatas:\n 277 # fill metadatas with empty dicts if somebody\n 278 # did not specify metadata for all texts\n 279 length_diff = len(texts) - len(metadatas)\n\nFile /opt/anaconda3_envs/abeille_pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1688, in Module.__getattr__(self, name)\n 1686 if name in modules:\n 1687 return modules[name]\n-> 1688 raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\n\nAttributeError: 'SentenceTransformer' object has no attribute 'embed_documents'```"} +{"id": "000400", "text": "I am following quick start of Langchain to call open ai for LLM.\nhttps://python.langchain.com/docs/get_started/quickstart\nWhile running the below python code I am getting error.\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.output_parsers import StrOutputParser\n\nllm = ChatOpenAI(openai_api_key=\"sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\")\n\nprompt = ChatPromptTemplate.from_messages([\n (\"system\", \"You are world class technical documentation writer.\"),\n (\"user\", \"{input}\")\n])\n\nchain = prompt | llm \nchain.invoke({\"input\": \"how can langsmith help with testing?\"})\noutput_parser = StrOutputParser()\nchain = prompt | llm | output_parser\nchain.invoke({\"input\": \"how can langsmith help with testing?\"})\n\nI am getting below error:\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"c:\\Python\\Sysint_NPL2SQL\\.venv\\Lib\\site-packages\\openai\\_base_client.py\", line 902, in request\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"c:\\Python\\Sysint_NPL2SQL\\.venv\\Lib\\site-packages\\openai\\_base_client.py\", line 902, in request\n return self._request(\n ^^^^^^^^^^^^^^\n File \"c:\\Python\\Sysint_NPL2SQL\\.venv\\Lib\\site-packages\\openai\\_base_client.py\", line 993, in _request\n raise self._make_status_error_from_response(err.response) from None\nopenai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}\nPS C:\\Python\\Sysint_NPL2SQL> c:; cd 'c:\\Python\\Sysint_NPL2SQL'; & 'c:\\Python\\Sysint_NPL2SQL\\.venv\\Scripts\\python.exe' 'c:\\Users\\yatanveer.singh\\.vscode\\extensions\\ms-python.debugpy-2024.2.0-win32-x64\\bundled\\libs\\debugpy\\adapter/../..\\debugpy\\launcher' '64396' '--' 'C:\\Python\\Sysint_NPL2SQL\\langchaindemo.py'\nPS C:\\Python\\Sysint_NPL2SQL> c:; cd 'c:\\Python\\Sysint_NPL2SQL'; & 'c:\\Python\\Sysint_NPL2SQL\\.venv\\Scripts\\python.exe' 'c:\\Users\\yatanveer.singh\\.vscode\\extensions\\ms-python.debugpy-2024.2.0-win32-x64\\bundled\\libs\\debugpy\\adapter/../..\\debugpy\\launcher' '64413' 
'--' 'C:\\Python\\Sysint_NPL2SQL\\langchaindemo.py'\n\nAny pointer will help.\nThanks\nYatan"} +{"id": "000401", "text": "I am trying to use LangChain embeddings, using the following code in Google colab:\nThese are the installations:\npip install pypdf\npip install -q transformers einops accelerate langchain bitsandbytes\npip install install sentence_transformers\npip3 install llama-index --upgrade\npip install llama-index-llms-huggingface\nhuggingface-cli login\npip install -U llama-index-core llama-index-llms-openai llama-index-embeddings-openai\n\n\n\nThen I ran this code in the google colab:\nfrom llama_index.core import VectorStoreIndex,SimpleDirectoryReader,ServiceContext\nfrom llama_index.llms.huggingface import HuggingFaceLLM\nfrom llama_index.core.prompts.prompts import SimpleInputPrompt\n\n\n# Reading pdf\ndocuments=SimpleDirectoryReader(\"/content/sample_data/Data\").load_data()\n\n#Prompt\nquery_wrapper_prompt=SimpleInputPrompt(\"<|USER|>{query_str}<|ASSISTANT|>\")\n\nimport torch\nllm = HuggingFaceLLM(\n context_window=4096,\n max_new_tokens=256,\n generate_kwargs={\"temperature\": 0.0, \"do_sample\": False},\n system_prompt=system_prompt,\n query_wrapper_prompt=query_wrapper_prompt,\n tokenizer_name=\"meta-llama/Llama-2-7b-chat-hf\",\n model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n device_map=\"auto\",\n # uncomment this if using CUDA to reduce memory usage\n model_kwargs={\"torch_dtype\": torch.float16 , \"load_in_8bit\":True}\n)\n\n# Embeddings\nfrom langchain.embeddings.huggingface import HuggingFaceEmbeddings\nfrom llama_index.core import ServiceContext\nfrom llama_index.embeddings.langchain import LangchainEmbedding\n\nembed_model=LangchainEmbedding(\n HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-mpnet-base-v2\"))\n\n\n\nThen I got this error:\nModuleNotFoundError: No module named 'llama_index.embeddings.langchain'\nI am using latest version of llama-index\nVersion: 0.10.26\nCan someone suggest, how to resolve this error."} +{"id": "000402", "text": "I am trying to ask GPT 4 to use Wikipedia for a prompt, using agents and tools via LangChain.\nThe difficulty I'm running into is the book I've been using, Developing Apps with GPT-4 and ChatGPT: Build Intelligent Chatbots, Content Generators, and More, while published in 2023, already has code examples that are deprecated.\nFor example, I am trying to do something similar to the code provided on page 114 of that book:\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.agents import load_tools, initialize_agent, AgentType llm = ChatOpenAI(model_name=\"gpt-3.5-turbo\", temperature=0)\ntools = load_tools([\"wikipedia\", \"llm-math\"], llm=llm)\nagent = initialize_agent(\ntools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True )\n question = \"\"\"What is the square root of the population of the capital of the\n Country where the Olympic Games were held in 2016?\"\"\"\n agent.run(question)\n\nI see much of this is deprecated (e.g., initialize_agent), so I have looked around StackOverflow, GitHub, and the LangChain Python documents to come up with this:\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.output_parsers import StrOutputParser\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain.agents import (\n load_tools, create_structured_chat_agent, AgentExecutor\n)\n\nmodel = ChatOpenAI(model=\"gpt-4\", temperature=0)\ntools = load_tools([\"wikipedia\"])\nprompt = ChatPromptTemplate.from_template(\n \"\"\"\n You are a research assistant, and your job is 
to retrieve information about\n movies and movie directors.\n \n Use the following tool: {tools}\n \n Use the following format:\n\n Question: the input question you must answer\n Thought: you should always think about what to do\n Action: the action to take, should be one of [{tool_names}]\n Action Input: the input to the action\n Observation: the result of the action\n ... (this Thought/Action/Action Input/Observation can repeat N times)\n Thought: I now know the final answer\n Final Answer: the final answer to the original input question. You only\n need to give the number, no other information or explanation is necessary.\n\n Begin!\n\n Question: How many movies did the director of the {year} movie {name} direct\n before they made {name}?\n Thought: {agent_scratchpad}\n \"\"\"\n)\nagent = create_structured_chat_agent(model, tools, prompt)\nagent_executor = AgentExecutor(agent=agent, tools=tools)\nagent_executor.invoke({\"year\": \"1991\", \"name\": \"thelma and louise\"})\n\nI'm going to be running this through a loop of many movies, so I'd like it to only return one integer (in this case, 6). But it seems like I need to give it that full thought process prompt; I can't get it to run if I don't include {tools}, {tool_names}, and {agent_scratchpad} in the prompt (per this GitHub post).\nThe frustrating thing is I eventually do get the correct answer, but note that it is throwing an error:\nValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: First, I need to find out who directed the movie \"Thelma and Louise\" in 1991. \n Action: wikipedia\n Action Input: {'query': 'Thelma and Louise'}\n Observation: \n \"Thelma & Louise\" is a 1991 American female buddy road film directed by Ridley Scott and written by Callie Khouri. It stars Geena Davis as Thelma and Susan Sarandon as Louise, two friends who embark on a road trip with unforeseen consequences. The film became a critical and commercial success, receiving six Academy Award nominations and winning one for Best Original Screenplay for Khouri. Scott was nominated for Best Director.\n Thought: \n Ridley Scott directed the movie \"Thelma and Louise\". Now I need to find out how many movies he directed before this one.\n Action: wikipedia\n Action Input: {'query': 'Ridley Scott filmography'}\n Observation: \n Ridley Scott is an English filmmaker. Following his commercial breakthrough with the science fiction horror film Alien (1979), his best known works are the neo-noir dystopian science fiction film Blade Runner (1982), historical drama Gladiator (2000), and science fiction film The Martian (2015). Scott has directed more than 25 films and is known for his atmospheric, highly concentrated visual style. His films are also known for their strong female characters. Here is a list of his films before \"Thelma & Louise\": \n 1. The Duellists (1977)\n 2. Alien (1979)\n 3. Blade Runner (1982)\n 4. Legend (1985)\n 5. Someone to Watch Over Me (1987)\n 6. 
Black Rain (1989)\n Thought: \n Ridley Scott directed six movies before \"Thelma and Louise\".\n Final Answer: 6\n\nThis seems to be very common (here, and here, and also here, and lastly here).\nSo, I do what it tells me (see docs also) and update my AgentExecutor to:\nagent_executor = AgentExecutor(\n agent=agent, \n tools=tools,\n handle_parsing_errors=True\n)\n\nAnd that returns:\n{'year': '1991', 'name': 'thelma and louise', 'output': 'Agent stopped due to iteration limit or time limit.'}\n\nMy question: How can I use LangChain to combine GPT 4 and Wikipedia to get an answer to a query, when all I want back is an integer?"} +{"id": "000403", "text": "I use this command 'from langchain.document_loaders import TextLoader' for import TextLoader. It used to work but now it is ERROR. It shows 'Error: No module named 'pydantic_v1.class_validators'; 'pydantic_v1' is not a package' Anyone know how to fix it ? please !! Using Langchain ==> langchain==0.0.266\nenter image description here"} +{"id": "000404", "text": "I am trying to make a private llm with RAG capabilities. I successfully followed a few tutorials and made one. But I wish to view the context the MultiVectorRetriever retriever used when langchain invokes my query.\nThis is my code:\nfrom langchain_core.output_parsers import StrOutputParser\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain.retrievers.multi_vector import MultiVectorRetriever\nfrom langchain.storage import InMemoryStore\nfrom langchain_community.chat_models import ChatOllama\nfrom langchain_community.embeddings import GPT4AllEmbeddings\nfrom langchain_community.vectorstores import Chroma\nfrom langchain_core.documents import Document\nfrom langchain_core.runnables import RunnablePassthrough\nfrom PIL import Image\nimport io\nimport os\nimport uuid\nimport json\nimport base64\n\ndef convert_bytes_to_base64(image_bytes):\n encoded_string= base64.b64encode(image_bytes).decode(\"utf-8\")\n return \"data:image/jpeg;base64,\" + encoded_string\n\n#Load Retriever\n\npath=\"./vectorstore/pdf_test_file.pdf\"\n\n#Load from JSON files\ntexts = json.load(open(os.path.join(path, \"json\", \"texts.json\")))\ntext_summaries = json.load(open(os.path.join(path, \"json\", \"text_summaries.json\")))\ntables = json.load(open(os.path.join(path, \"json\", \"tables.json\")))\ntable_summaries = json.load(open(os.path.join(path, \"json\", \"table_summaries.json\")))\nimg_summaries = json.load(open(os.path.join(path, \"json\", \"img_summaries.json\")))\n\n#Load from figures\nimages_base64_list = []\nfor image in (os.listdir(os.path.join(path, \"figures\"))):\n \n img = Image.open(os.path.join(path, \"figures\",image))\n buffered = io.BytesIO()\n img.save(buffered,format=\"png\")\n image_base64 = convert_bytes_to_base64(buffered.getvalue())\n #Warning: this section of the code does not support external IDEs like spyder and will break. 
Run it loccally in the native terminal\n images_base64_list.append(image_base64)\n\n\n#Add to vectorstore\n\n# The vectorstore to use to index the child chunks\nvectorstore = Chroma(\n collection_name=\"summaries\", embedding_function=GPT4AllEmbeddings()\n)\n\n# The storage layer for the parent documents\nstore = InMemoryStore() # <- Can we extend this to images\nid_key = \"doc_id\"\n\n# The retriever (empty to start)\nretriever = MultiVectorRetriever(\n vectorstore=vectorstore,\n docstore=store,\n id_key=id_key,\n)\n\n# Add texts\ndoc_ids = [str(uuid.uuid4()) for _ in texts]\nsummary_texts = [\n Document(page_content=s, metadata={id_key: doc_ids[i]})\n for i, s in enumerate(text_summaries)\n]\nretriever.vectorstore.add_documents(summary_texts)\nretriever.docstore.mset(list(zip(doc_ids, texts)))\n\n# Add tables\ntable_ids = [str(uuid.uuid4()) for _ in tables]\nsummary_tables = [\n Document(page_content=s, metadata={id_key: table_ids[i]})\n for i, s in enumerate(table_summaries)\n]\nretriever.vectorstore.add_documents(summary_tables)\nretriever.docstore.mset(list(zip(table_ids, tables)))\n\n# Add images\nimg_ids = [str(uuid.uuid4()) for _ in img_summaries]\nsummary_img = [\n Document(page_content=s, metadata={id_key: img_ids[i]})\n for i, s in enumerate(img_summaries)\n]\nretriever.vectorstore.add_documents(summary_img)\nretriever.docstore.mset(\n list(zip(img_ids, img_summaries))\n) # Store the image summary as the raw document\n\n\nimg_summaries_ids_and_images_base64=[]\ncount=0\nfor img in images_base64_list:\n new_summary = [img_ids[count],img]\n img_summaries_ids_and_images_base64.append(new_summary)\n count+=1\n\n\n\n# Check Response\n\n# Question Example: \"What is the issues plagueing the acres?\"\n\n\"\"\"\nTesting Retrival\n\nprint(\"\\nTesting Retrival: \\n\")\nprompt = \"Images / figures with playful and creative examples\"\nresponce = retriever.get_relevant_documents(prompt)[0]\nprint(responce)\n\n\"\"\"\n\n\"\"\"\nretriever.vectorstore.similarity_search(\"What is the issues plagueing the acres? show any relevant tables\",k=10)\n\"\"\"\n\n# Prompt template\ntemplate = \"\"\"Answer the question based only on the following context, which can include text, tables and images/figures:\n{context}\nQuestion: {question}\n\"\"\"\n\nprompt = ChatPromptTemplate.from_template(template)\n\n# Multi-modal LLM\n# model = LLaVA\nmodel = ChatOllama(model=\"custom-mistral\")\n\n# RAG pipeline\nchain = (\n {\"context\": retriever, \"question\": RunnablePassthrough()}\n | prompt\n | model\n | StrOutputParser()\n)\n\nprint(\"\\n\\n\\nTesting Responce: \\n\")\n\nprint(chain.invoke(\n \"What is the issues plagueing the acres? show any relevant tables\"\n))\n\nThe output will look something like this:\n\nTesting Responce:\n\nIn the provided text, the main issue with acres is related to wildfires and their impact on various lands and properties. The text discusses the number of fires, acreage burned, and the level of destruction caused by wildfires in the United States from 2018 to 2022. 
It also highlights that most wildfires are human-caused (89% of the average number of wildfires from 2018 to 2022) and that fires caused by lightning tend to be slightly larger and burn more acreage than those caused by humans.\n\nHere's the table provided in the text, which shows the number of fires and acres burned on federal lands (by different organizations), other non-federal lands, and total:\n\n| Year | Number of Fires (thousands) | Acres Burned (millions) |\n|------|-----------------------------|--------------------------|\n| 2018 | 58.1 | 8.8 |\n| 2019 | 58.1 | 4.7 |\n| 2020 | 58.1 | 10.1 |\n| 2021 | 58.1 | 10.1 |\n| 2022 | 58.1 | 3.6 |\n\nThe table also breaks down the acreage burned by federal lands (DOI and FS) and other non-federal lands, as well as showing the total acreage burned each year.<|im_end|>\n\nFrom the RAG pipeline I wish to print out the context used from the retriever, which stores tons of vector embeddings. I wish to know which ones it uses for the query, something like:\nchain.invoke(\"What is the issues plagueing the acres? show any relevant tables\").get_context_used()\n\nI know there are functions like\nretriever.get_relevant_documents(prompt) \n\nand\nretriever.vectorstore.similarity_search(prompt) \n\nwhich provide the most relevant context to the query, but I'm unsure whether the invoke function pulls the same context as the other two functions.\nThe retriever I'm using from LangChain is the MultiVectorRetriever."} +{"id": "000405", "text": "I have two questions:\n\nHow could I change the distance metric directly in the function similarity_search? By default the function similarity_search uses Euclidean distance, and I want e.g. cosine. How could I do that?\n\nfrom eurelis_langchain_solr_vectorstore import Solr\n\nembeddings_model = OpenAIEmbeddings(model=\"bge-small-en\")\n\nvector_store = Solr(embeddings_model, core_kwargs={\n 'page_content_field': 'content', # field containing the text content\n 'vector_field': 'content_vec', # field containing the embeddings of the text content\n 'core_name': 'default', # core name\n 'url_base': 'http://localhost:8983/solr' # base url to access solr\n})\n\n# here I want to use cosine distance metric\nvector_store.similarity_search(\"relevant question\", k=5)\n\n\n\nHow could I change the distance metric directly in as_retriever?\n\n# here I want to use cosine distance metric\nretriever = vector_store.as_retriever(search_kwargs={'k': 5})"} +{"id": "000406", "text": "I query a collection in a Zilliz Milvus DB like this:\ndocuments = vector_store.similarity_search_with_score(query)\n\nThe query is successful, but in line 777 of milvus.py the value result.full_length is retrieved, which is not available:\nfor result in res[0]:\n data = {x: result.entity.get(x) for x in output_fields}\n doc = self._parse_document(data)\n pair = (doc, result.full_length)\n ret.append(pair)\n\nwhich then leads to this exception:\nFile \"/Users/tilman/LangchainCorsera/venv/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py\", line 644, in similarity_search\n res = self.similarity_search_with_score(\n File \"/Users/tilman/LangchainCorsera/venv/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py\", line 717, in similarity_search_with_score\n res = self.similarity_search_with_score_by_vector(\n File \"/Users/tilman/LangchainCorsera/venv/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py\", line 777, in similarity_search_with_score_by_vector\n pair = (doc, result.full_length)\n File 
\"/Users/tilman/LangchainCorsera/venv/lib/python3.9/site-packages/pymilvus/client/abstract.py\", line 588, in __getattr__\n raise MilvusException(message=f\"Field {item} is not in the hit entity\")\npymilvus.exceptions.MilvusException: \n\nAny clues?"} +{"id": "000407", "text": "I am trying to build a Chat PDF application using langchain,\nDuring this I installed all the necessary packages, but there is one issue with this chromadb, which no matter what I do, it keeps showing the error.\nI installed it, ran it many times, but I keep getting this error asking to install chromadb and\nhere is the screenshot of the error\nrepo link\nI tried uninstalling and installing again, GPTed, saw issues in Github but nothing seems to help me fix the issue"} +{"id": "000408", "text": "I have the code:\nloader = PyPDFLoader(\u201chttps://arxiv.org/pdf/2303.08774.pdf\u201d)\ndata = loader.load()\ndocs = text_splitter1.split_documents(data)\nvector_search_index = \u201cvector_index\u201d\n\nvector_search = MongoDBAtlasVectorSearch.from_documents(\n documents=docs,\n embedding=OpenAIEmbeddings(disallowed_special=()),\n collection=atlas_collection,\n index_name=vector_search_index,\n)\n\nquery = \"What were the compute requirements for training GPT 4\"\nresults = vector_search1.similarity_search(query)\nprint(\"result: \", results)\n\nAnd in results I have every time only empty array. I don't understand what I do wrong. This is the link on the langchain documentation with examples. Information is saved normally in database, but I cannot search info in this collection."} +{"id": "000409", "text": "I'm trying to hop onto the LCEL and Langserve train but I'm having a little trouble understanding a bit of the \"magic\" involved with accessing variables set in the pipeline's dictionary.\nThe variables appear to be resolvable from prompt templates. I'd like to retrieve these values in custom functions, etc. but it's not clear to me how to access them directly. Take the following contrived example which aims to return the source document as well as the answer in the response:\nclass ChatResponse(BaseModel):\n answer: str\n sources: List[Document]\n\nstore = FAISS.from_texts(\n [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n)\nretriever = store.as_retriever()\ntemplate = \"\"\"Answer the question based only on the following context:\n{context}\n\nQuestion: {question}\n\"\"\"\nprompt = ChatPromptTemplate.from_template(template)\nllm = ChatOpenAI()\n\ndef format_response(answer):\n sources = [] # TODO lookup source documents (key: 'context')\n return ChatResponse(answer=answer, sources=sources)\n\nretrieval_chain = (\n {\"context\": retriever, \"question\": RunnablePassthrough()}\n | prompt\n | llm\n | StrOutputParser()\n | RunnableLambda(format_response)\n)\napp = FastAPI()\nadd_routes(app, retrieval_chain, path=\"/chat\", input_type=str, output_type=ChatResponse)\n\nIn format_response, I've left a TODO to lookup the source documents. I'd like to retrieve the source documents from the pipeline's context key. 
How would I access this key that was set from the first step of the chain?"} +{"id": "000410", "text": "I have been doing a POC to implement a RAG-driven model for my AI/ML use case.\nThe use case is to \"Find Similar and duplicate controls by comparing each ID with every other ID, Generate similarity scores and list the pairs which exceeds a threshold of 80-87 for similar controls and exceeding above 95 for duplicate controls\"\nThe code snippet is:\nloader = CSVLoader(file_path=\"control.csv\")\ndata = loader.load()\ntext_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)\nchunks = text_splitter.split_documents(data)\nvectorstore = Chroma.from_documents(documents=chunks, embedding=OpenAIEmbeddings())\nretriever = vectorstore.as_retriever()\ntemplate = \"\"\"You are an assistant for question-answering tasks.\nUse the following pieces of retrieved context to answer the question.\nIf you don't know the answer, just say that you don't know.\nUse three sentences maximum and keep the answer concise.\nQuestion: {question}\nContext: {context}\nAnswer:\n\"\"\"\nprompt = ChatPromptTemplate.from_template(template)\nllm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\",verbose=True)\nrag_chain = ( {\"context\": retriever, \"question\": RunnablePassthrough()} | prompt | llm | StrOutputParser() )\nquery = \"FInd Similar controls by comparing each ID with every other ID in the document, combining their Name and Description. Calculate similarity scores between them and list all the pairs that is exceeding a threshold of 80-87for similar controls and above 95 for duplicate controls.\"\nrag_chain.invoke(query)\nThe output I got was:\n1. There are a total of 6 controls formed by comparing each ID with every other ID in the document. The similarity scores between them can be calculated and pairs exceeding a threshold of 80 can be listed in the output.\n2. I don't Know\nMy expected outcome is to print the list of similar and duplicate pairs from the data, which has around 3500+ records.\nBut I don't see the expected output here. I am not sure where I am wrong. I would also like to know if I have used the right prompt for the scenario.\nAlso, I have tried the same prompt where I have not implemented RAG, and I got proper results; that was just a connection made with LangChain and OpenAI for interaction.\nI would like to know where I am wrong and what needs to be corrected in order to get the expected outcome."} +{"id": "000411", "text": "I am working on a LangChain-based SQL chat application and want my agent to understand context w.r.t. the user session. For example:\nUser - What is highest order placed in last placed?\nBot - Order id : XYZ\nUser - When was this placed?\nHere, the bot should be able to deduce that 'this' refers to 'order id XYZ' from the previous question. How can I incorporate this in my code?\nI tried using ChatHistory, but getting context from the session history is where I am stuck."} +{"id": "000412", "text": "I'm starting to learn LangChain and stumbled upon their Getting Started section. It doesn't work, and I'm curious whether I am the only person for whom the LangChain examples don't work.\nThis is the tutorial I am talking about: https://python.langchain.com/docs/get_started/quickstart/\nLet's use the very first example:\nllm = ChatOpenAI(openai_api_key=api_key)\nllm.invoke(\"how can langsmith help with testing?\")\n\nI wrote some initializing code as well to make ChatOpenAI work:\nimport os\nfrom langchain_openai import ChatOpenAI\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\napi_key = os.getenv(\"OPENAI_API_KEY\")\n\nllm = ChatOpenAI(openai_api_key=api_key)\nllm.invoke(\"how can langsmith help with testing?\")\n\nThe invoke function seems to be executed, as I can't see any error message. But I also can't see any further output. Nothing happens.\nThey even wrote \"We can also guide its response with a prompt template.\" However, there is no response.\nCan anyone explain to me what is happening here? And can you perhaps recommend a better tutorial instead of the one from LangChain?"} +{"id": "000413", "text": "I have a database to which I have connected an agent. However, I have noticed that it sometimes gets confused about whether it should return a column ID or a person's first name when asked \"which person sold the most....?\" Is there a way to tune/adjust create_sql_agent from langchain.agents so that I can tell the agent not to return the column ID but to return the first and last name for questions structured like that?\nI think the question may be related to this post, but I am unsure how to include and structure that properly: https://github.com/langchain-ai/langchain/discussions/9591\nSystem Info\nlangchain-openai==0.1.3\nPython 3.11.7\nWindows 11\nBasic Model\nfrom langchain_openai import ChatOpenAI\nfrom langchain.agents.agent_toolkits import SQLDatabaseToolkit\nfrom langchain.agents import create_sql_agent\nfrom langchain.sql_database import SQLDatabase\n\n\nllm = ChatOpenAI(model_name=\"gpt-3.5-turbo-1106\", temperature=0, openai_api_key=os.environ.get('OPENAI_API_KEY'))\n\ntoolkit = SQLDatabaseToolkit(db=db, llm=llm)\n\nagent_executor = create_sql_agent(\n llm=llm,\n toolkit=toolkit,\n verbose=False,\n agent_type=\"openai-tools\")\n\n\nprint(agent_executor.invoke(\"What is my data about\"))\n\nNothing; unsure how to progress as I cannot find examples."} +{"id": "000414", "text": "Currently, I am getting back multiple responses, or the model doesn't know when to end a response, and it seems to repeat the system prompt in the response(?). I simply want to get a single response back. My setup is very simple, so I imagine I am missing implementation details, but what can I do to return only a single response?\nfrom langchain_community.llms import Ollama\n\nllm = Ollama(model=\"llama3\")\n\ndef get_model_response(user_prompt, system_prompt):\n prompt = f\"\"\"\n <|begin_of_text|>\n <|start_header_id|>system<|end_header_id|>\n { system_prompt }\n <|eot_id|>\n <|start_header_id|>user<|end_header_id|>\n { user_prompt }\n <|eot_id|>\n <|start_header_id|>assistant<|end_header_id|>\n \"\"\"\n response = llm.invoke(prompt)\n return response"} +{"id": "000415", "text": "I have put together a script that works just fine using the OpenAI API. I am now trying to switch it over to AzureOpenAI, yet it seems I am running into an issue with create_sql_agent(). Can you use create_sql_agent with the AzureOpenAI model gpt-35-turbo-1106? Could it be an issue with my api_version within AzureOpenAI()? The error I receive is \"TypeError: Completions. 
create() got an unexpected keyword argument 'tools'\" which I think could also be the option using 'openai-tools' as my agent_type?\nCode\nimport os\nfrom langchain_openai import AzureOpenAI\nfrom langchain.agents import create_sql_agent\nfrom langchain.agents.agent_toolkits import SQLDatabaseToolkit\nfrom langchain.sql_database import SQLDatabase\nfrom dotenv import load_dotenv\nfrom langchain.agents import AgentExecutor\n\nfrom langchain_core.prompts.chat import (\n ChatPromptTemplate,\n HumanMessagePromptTemplate,\n SystemMessagePromptTemplate,\n AIMessagePromptTemplate,\n MessagesPlaceholder,\n)\n\npath = (os.getcwd()+'\\creds.env')\n\nload_dotenv(path) \n\ndb = SQLDatabase.from_uri(\n f\"postgresql://{os.environ.get('user')}:{os.environ.get('password')}@{os.environ.get('host')}:{os.environ.get('port')}/{os.environ.get('database')}\")\n\nllm = AzureOpenAI(azure_endpoint=MY_ENDPOINT,\n deployment_name=MY_DEPLOYMENT_NAME,\n model_name='gpt-35-turbo', # should it be 'gpt-35-turbo-1106'?\n temperature = 0,\n api_key = MY_KEY,\n api_version = '2023-07-01-preview') #my api_version correct? Uncertain which one\n\ntoolkit = SQLDatabaseToolkit(db=db, llm=llm)\n\nprefix = \"\"\"\nYou are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double-check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP, CASCADE, etc.) to the database.\n\nIf the question does not seem related to the database, just return \"I don't know\" as the answer.\n\nIf asked about a person do not return an 'ID' but return a first name and last name.\n\n\"\"\"\n\nsuffix = \"\"\" I should look at the tables in the database to see what I can query. 
Then I should query the schema of the most relevant tables.\n\"\"\"\n\nmessages = [\n SystemMessagePromptTemplate.from_template(prefix),\n HumanMessagePromptTemplate.from_template(\"{input}\"),\n AIMessagePromptTemplate.from_template(suffix),\n MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n ]\n\n\nagent_executor = create_sql_agent(llm,\n toolkit=toolkit,\n agent_type='openai-tools', #does this work with azure?\n prompt=prompt,\n verbose=False)\n\n\nprint(agent_executor.invoke(\"What are the names of the tables\"))\n\nError\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\nCell In[69], line 1\n----> 1 print(agent_executor.invoke(\"What are the names of the tables\"))\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\chains\\base.py:163, in Chain.invoke(self, input, config, **kwargs)\n 161 except BaseException as e:\n 162 run_manager.on_chain_error(e)\n--> 163 raise e\n 164 run_manager.on_chain_end(outputs)\n 166 if include_run_info:\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\chains\\base.py:153, in Chain.invoke(self, input, config, **kwargs)\n 150 try:\n 151 self._validate_inputs(inputs)\n 152 outputs = (\n--> 153 self._call(inputs, run_manager=run_manager)\n 154 if new_arg_supported\n 155 else self._call(inputs)\n 156 )\n 158 final_outputs: Dict[str, Any] = self.prep_outputs(\n 159 inputs, outputs, return_only_outputs\n 160 )\n 161 except BaseException as e:\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\agents\\agent.py:1432, in AgentExecutor._call(self, inputs, run_manager)\n 1430 # We now enter the agent loop (until it returns something).\n 1431 while self._should_continue(iterations, time_elapsed):\n-> 1432 next_step_output = self._take_next_step(\n 1433 name_to_tool_map,\n 1434 color_mapping,\n 1435 inputs,\n 1436 intermediate_steps,\n 1437 run_manager=run_manager,\n 1438 )\n 1439 if isinstance(next_step_output, AgentFinish):\n 1440 return self._return(\n 1441 next_step_output, intermediate_steps, run_manager=run_manager\n 1442 )\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\agents\\agent.py:1138, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\n 1129 def _take_next_step(\n 1130 self,\n 1131 name_to_tool_map: Dict[str, BaseTool],\n (...)\n 1135 run_manager: Optional[CallbackManagerForChainRun] = None,\n 1136 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:\n 1137 return self._consume_next_step(\n-> 1138 [\n 1139 a\n 1140 for a in self._iter_next_step(\n 1141 name_to_tool_map,\n 1142 color_mapping,\n 1143 inputs,\n 1144 intermediate_steps,\n 1145 run_manager,\n 1146 )\n 1147 ]\n 1148 )\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\agents\\agent.py:1138, in (.0)\n 1129 def _take_next_step(\n 1130 self,\n 1131 name_to_tool_map: Dict[str, BaseTool],\n (...)\n 1135 run_manager: Optional[CallbackManagerForChainRun] = None,\n 1136 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:\n 1137 return self._consume_next_step(\n-> 1138 [\n 1139 a\n 1140 for a in self._iter_next_step(\n 1141 name_to_tool_map,\n 1142 color_mapping,\n 1143 inputs,\n 1144 intermediate_steps,\n 1145 run_manager,\n 1146 )\n 1147 ]\n 1148 )\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\agents\\agent.py:1166, in 
AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\n 1163 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)\n 1165 # Call the LLM to see what to do.\n-> 1166 output = self.agent.plan(\n 1167 intermediate_steps,\n 1168 callbacks=run_manager.get_child() if run_manager else None,\n 1169 **inputs,\n 1170 )\n 1171 except OutputParserException as e:\n 1172 if isinstance(self.handle_parsing_errors, bool):\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain\\agents\\agent.py:514, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs)\n 506 final_output: Any = None\n 507 if self.stream_runnable:\n 508 # Use streaming to make sure that the underlying LLM is invoked in a\n 509 # streaming\n (...)\n 512 # Because the response from the plan is not a generator, we need to\n 513 # accumulate the output into final output and return that.\n--> 514 for chunk in self.runnable.stream(inputs, config={\"callbacks\": callbacks}):\n 515 if final_output is None:\n 516 final_output = chunk\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2875, in RunnableSequence.stream(self, input, config, **kwargs)\n 2869 def stream(\n 2870 self,\n 2871 input: Input,\n 2872 config: Optional[RunnableConfig] = None,\n 2873 **kwargs: Optional[Any],\n 2874 ) -> Iterator[Output]:\n-> 2875 yield from self.transform(iter([input]), config, **kwargs)\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2862, in RunnableSequence.transform(self, input, config, **kwargs)\n 2856 def transform(\n 2857 self,\n 2858 input: Iterator[Input],\n 2859 config: Optional[RunnableConfig] = None,\n 2860 **kwargs: Optional[Any],\n 2861 ) -> Iterator[Output]:\n-> 2862 yield from self._transform_stream_with_config(\n 2863 input,\n 2864 self._transform,\n 2865 patch_config(config, run_name=(config or {}).get(\"run_name\") or self.name),\n 2866 **kwargs,\n 2867 )\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_core\\runnables\\base.py:1880, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)\n 1878 try:\n 1879 while True:\n-> 1880 chunk: Output = context.run(next, iterator) # type: ignore\n 1881 yield chunk\n 1882 if final_output_supported:\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2826, in RunnableSequence._transform(self, input, run_manager, config)\n 2817 for step in steps:\n 2818 final_pipeline = step.transform(\n 2819 final_pipeline,\n 2820 patch_config(\n (...)\n 2823 ),\n 2824 )\n-> 2826 for output in final_pipeline:\n 2827 yield output\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_core\\runnables\\base.py:1283, in Runnable.transform(self, input, config, **kwargs)\n 1280 final: Input\n 1281 got_first_val = False\n-> 1283 for chunk in input:\n 1284 if not got_first_val:\n 1285 final = adapt_first_streaming_chunk(chunk) # type: ignore\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_core\\runnables\\base.py:4728, in RunnableBindingBase.transform(self, input, config, **kwargs)\n 4722 def transform(\n 4723 self,\n 4724 input: Iterator[Input],\n 4725 config: Optional[RunnableConfig] = None,\n 4726 **kwargs: Any,\n 4727 ) -> Iterator[Output]:\n-> 4728 yield from 
self.bound.transform(\n 4729 input,\n 4730 self._merge_configs(config),\n 4731 **{**self.kwargs, **kwargs},\n 4732 )\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_core\\runnables\\base.py:1300, in Runnable.transform(self, input, config, **kwargs)\n 1293 raise TypeError(\n 1294 f\"Failed while trying to add together \"\n 1295 f\"type {type(final)} and {type(chunk)}.\"\n 1296 f\"These types should be addable for transform to work.\"\n 1297 )\n 1299 if got_first_val:\n-> 1300 yield from self.stream(final, config, **kwargs)\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_core\\language_models\\llms.py:458, in BaseLLM.stream(self, input, config, stop, **kwargs)\n 451 except BaseException as e:\n 452 run_manager.on_llm_error(\n 453 e,\n 454 response=LLMResult(\n 455 generations=[[generation]] if generation else []\n 456 ),\n 457 )\n--> 458 raise e\n 459 else:\n 460 run_manager.on_llm_end(LLMResult(generations=[[generation]]))\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_core\\language_models\\llms.py:442, in BaseLLM.stream(self, input, config, stop, **kwargs)\n 440 generation: Optional[GenerationChunk] = None\n 441 try:\n--> 442 for chunk in self._stream(\n 443 prompt, stop=stop, run_manager=run_manager, **kwargs\n 444 ):\n 445 yield chunk.text\n 446 if generation is None:\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\langchain_openai\\llms\\base.py:262, in BaseOpenAI._stream(self, prompt, stop, run_manager, **kwargs)\n 260 params = {**self._invocation_params, **kwargs, \"stream\": True}\n 261 self.get_sub_prompts(params, [prompt], stop) # this mutates params\n--> 262 for stream_resp in self.client.create(prompt=prompt, **params):\n 263 if not isinstance(stream_resp, dict):\n 264 stream_resp = stream_resp.model_dump()\n\nFile ~\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\openai\\_utils\\_utils.py:277, in required_args..inner..wrapper(*args, **kwargs)\n 275 msg = f\"Missing required argument: {quote(missing[0])}\"\n 276 raise TypeError(msg)\n--> 277 return func(*args, **kwargs)\n\nTypeError: Completions.create() got an unexpected keyword argument 'tools'"} +{"id": "000416", "text": "I'm working with the langchain library to implement a document analysis application. Especifically I want to use the routing technique described in this documentation. 
i wanted to follow along the example, but my environment is restricted to AWS, and I am using ChatBedrock instead of ChatOpenAI due to limitations with my deployment.\nAccording to this overview the with_structured_output method, which I need, is not (yet) implemented for models on AWS Bedrock, which is why I am looking for a workaround or any method to replicate this functionality.\nThe key functionality I am looking for is shown in this example:\nfrom typing import List\nfrom typing import Literal\n\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.pydantic_v1 import BaseModel, Field\nfrom langchain_openai import ChatOpenAI\n\n\n\nclass RouteQuery(BaseModel):\n \"\"\"Route a user query to the most relevant datasource.\"\"\"\n\n datasources: List[Literal[\"python_docs\", \"js_docs\", \"golang_docs\"]] = Field(\n ...,\n description=\"Given a user question choose which datasources would be most relevant for answering their question\",\n )\n\nsystem = \"\"\"You are an expert at routing a user question to the appropriate data source.\n\nBased on the programming language the question is referring to, route it to the relevant data source.\"\"\"\nprompt = ChatPromptTemplate.from_messages(\n [\n (\"system\", system),\n (\"human\", \"{question}\"),\n ]\n)\n\nllm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\nstructured_llm = llm.with_structured_output(RouteQuery)\nrouter = prompt | structured_llm\nrouter.invoke(\n {\n \"question\": \"is there feature parity between the Python and JS implementations of OpenAI chat models\"\n }\n)\n\nThe output would be:\nRouteQuery(datasources=['python_docs', 'js_docs'])\n\nThe most important fact for me is that it just selects items from the list without any additional overhead, which makes it possible to setup the right follow up questions.\nDid anyone find a workaround how to resolve this issue?"} +{"id": "000417", "text": "I'm very new to LangChain, and I'm working with around 100-150 HTML files on my local disk that I need to upload to a server for NLP model training. However, I have to divide my information into chunks because each file is only permitted to have a maximum of 20K characters. I'm trying to use the LangChain library to do so, but I'm not being successful in splitting my files into my desired chunks.\nFor reference, I'm using this URL: http://www.hadoopadmin.co.in/faq/ Saved locally as HTML only.\nIt's a Hadoop FAQ page that I've downloaded as an HTML file onto my PC. There are many questions and answers there. I've noticed that sometimes, for some files, it gets split by a mere title, and another split is the paragraph following that title. 
But my desired output would be to have the title and the specific paragraph or following text from the body of the page, and as metadata, the title of the page.\nI'm using this code:\nfrom langchain_community.document_loaders import UnstructuredHTMLLoader\nfrom langchain_text_splitters import HTMLHeaderTextSplitter\n# Same Example with the URL http://www.hadoopadmin.co.in/faq/ Saved Locally as HTML Only\ndir_html_file='FAQ \u2013 BigData.html'\n\ndata_html = UnstructuredHTMLLoader(dir_html_file).load()\n\nheaders_to_split_on = [\n (\"h1\", \"Header 1\")]\nhtml_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)\nhtml_header_splits = html_splitter.split_text(str(data_html))\n\nBut is returning a bunch of weird characters and not splitting the document at all.\nThis is an output:\n[Document(page_content='[Document(page_content=\\'BigData\\\\n\\\\n\"You can have data without information, but you cannot have information without Big data.\"\\\\n\\\\nsaurabhmcakiet@gmail.com\\\\n\\\\n+91-8147644946\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n\\\\nToggle navigation\\\\n\\\\nHome\\\\n\\\\nBigData\\\\n\\\\n\\\\tOverview of BigData\\\\n\\\\tSources of BigData\\\\n\\\\tPros & Cons of BigData\\\\n\\\\tSolutions of BigData\\\\n\\\\nHadoop Admin\\\\n\\\\n\\\\tHadoop\\\\n\\\\t\\\\n\\\\t\\\\tOverview of HDFS\\\\n\\\\t\\\\tOverview of MapReduce\\\\n\\\\t\\\\tApache YARN\\\\n\\\\t\\\\tHadoop Architecture\\\\n\\\\t\\\\n\\\\n\\\\tPlanning of Hadoop Cluster\\\\n\\\\tAdministration and Maintenance\\\\n\\\\tHadoop Ecosystem\\\\n\\\\tSetup HDP cluster from scratch\\\\n\\\\tInstallation and Configuration\\\\n\\\\tAdvanced Cluster Configuration\\\\n\\\\tOverview of Ranger\\\\n\\\\tKerberos\\\\n\\\\t\\\\n\\\\t\\\\tInstalling kerberos/Configuring the KDC and Enabling Kerberos Security\\\\n\\\\t\\\\tConfigure SPNEGO Authentication for Hadoop\\\\n\\\\t\\\\tDisabled kerberos via ambari\\\\n\\\\t\\\\tCommon issues after Disabling kerberos via Ambari\\\\n\\\\t\\\\tEnable https for ambari Server\\\\n\\\\t\\\\tEnable SSL or HTTPS for Oozie Web UI\\\\n\\\\nHadoop Dev\\\\n\\\\n\\\\tSolr\\\\n\\\\t\\\\n\\\\t\\\\tSolr Installation\\\\n\\\\t\\\\tCommits and Optimizing in Solr and its use for NRT\\\\n\\\\t\\\\tSolr FAQ\\\\n\\\\t\\\\n\\\\n\\\\tApache Kafka\\\\n\\\\t\\\\n\\\\t\\\\tKafka QuickStart\\\\n\\\\t\\\\n\\\\n\\\\tGet last access time of hdfs files\\\\n\\\\tProcess hdfs data with Java\\\\n\\\\tProcess hdfs data with Pig\\\\n\\\\tProcess hdfs data with Hive\\\\n\\\\tProcess hdfs data with Sqoop/Flume\\\\n\\\\nBigData Architect\\\\n\\\\n\\\\tSolution Vs Enterprise Vs Technical Architect\u2019s Role and Responsibilities\\\\n\\\\tSolution architect certification\\\\n\\\\nAbout me\\\\n\\\\nFAQ\\\\n\\\\nAsk Questions\\\\n\\\\nFAQ\\\\n\\\\nHome\\\\n\\\\nFAQ\\\\n\\\\nFrequently\\\\xa0Asked Questions about Big Data\\\\n\\\\nMany questions about big data have yet to be answered in a vendor-neutral way. With so many definitions, opinions run the gamut. Here I will attempt to cut to the heart of the matter by addressing some key questions I often get from readers, clients and industry analysts.\\\\n\\\\n1) What is Big Data?\\\\n\\\\n1) What is Big Data?\\\\n\\\\nBig data\u201d is an all-inclusive term used to describe vast amounts of information. 
In contrast to traditional structured data which is typically stored in a relational database, big data varies in terms of volume, velocity, and variety.\\\\n\\\\nBig data\\\\xa0is characteristically generated in large volumes \u2013 on the order of terabytes or exabytes of data (starts with 1 and has 18 zeros after it, or 1 million terabytes) per individual data set.\\\\n\\\\nBig data\\\\xa0is also generated with high velocity \u2013 it is collected at frequent intervals \u2013 which makes it difficult to analyze (though analyzing it rapidly makes it more valuable).\\\\n\\\\nOr in simple words we can say \u201cBig Data includes data sets whose size is beyond the ability of traditional software tools to capture, manage, and process the data in a reasonable time.\u201d\\\\n\\\\n2) How much data does it take to be called Big Data?\\\\n\\\\nThis question cannot be easily answered absolutely. Based on the infrastructure on the market the lower threshold is at about 1 to 3 terabytes.\\\\n\\\\nBut using Big Data technologies can be sensible for smaller databases as well, for example if complex mathematiccal or statistical analyses are run against a database. Netezza offers about 200 built in functions and computer languages like Revolution R or Phyton which can be used in such cases.\\\\n\\\\\n\nMy Expected output will look something like this:\nOne chunk:\n\nFrequently Asked Questions about Big Data\n\nMany questions about big data have yet to be answered in a vendor-neutral way. With so many definitions, opinions run the gamut. Here I will attempt to cut to the heart of the matter by addressing some key questions I often get from readers, clients and industry analysts.\n\n1) What is Big Data?\n\u201cBig data\u201d is an all-inclusive term used to describe vast amounts of information. In contrast to traditional structured data which is typically stored in a relational database, big data varies in terms of volume, velocity, and variety. Big data is characteristically generated in large volumes \u2013 on the order of terabytes or exabytes of data (starts with 1 and has 18 zeros after it, or 1 million terabytes) per individual data set. Big data is also generated with high velocity \u2013 it is collected at frequent intervals \u2013 which makes it difficult to analyze (though analyzing it rapidly makes it more valuable).\nOr in simple words we can say \u201cBig Data includes data sets whose size is beyond the ability of traditional software tools to capture, manage, and process the data in a reasonable time.\u201d\n2) How much data does it take to be called Big Data?\nThis question cannot be easily answered absolutely. Based on the infrastructure on the market the lower threshold is at about 1 to 3 terabytes.\nBut using Big Data technologies can be sensible for smaller databases as well, for example if complex mathematical or statistical analyses are run against a database. Netezza offers about 200 built in functions and computer languages like Revolution R or Phyton which can be used in such cases.\n\nMetadata: FAQ\n\n\nAnother Chunck\n7) Where is the big data trend going?\nEventually the big data hype will wear off, but studies show that big data adoption will continue to grow. With a projected $16.9B market by 2015 (Wikibon goes even further to say $50B by 2017), it is clear that big data is here to stay. However, the big data talent pool is lagging behind and will need to catch up to the pace of the market. 
McKinsey & Company estimated in May 2011 that by 2018, the US alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.\nThe emergence of big data analytics has permanently altered many businesses\u2019 way of looking at data. Big data can take companies down a long road of staff, technology, and data storage augmentation, but the payoff \u2013 rapid insight into never-before-examined data \u2013 can be huge. As more use cases come to light over the coming years and technologies mature, big data will undoubtedly reach critical mass and will no longer be labeled a trend. Soon it will simply be another mechanism in the BI ecosystem.\n8) Who are some of the BIG DATA users?\nFrom cloud companies like Amazon to healthcare companies to financial firms, it seems as if everyone is developing a strategy to use big data. For example, every mobile phone user has a monthly bill which catalogs every call and every text; processing the sheer volume of that data can be challenging. Software logs, remote sensing technologies, information-sensing mobile devices all pose a challenge in terms of the volumes of data created. The size of Big Data can be relative to the size of the enterprise. For some, it may be hundreds of gigabytes, for others, tens or hundreds of terabytes to cause consideration.\n9) Data visualization is becoming more popular than ever.\nIn my opinion, it is absolutely essential for organizations to embrace interactive data visualization tools. Blame or thank big data for that and these tools are amazing. They are helping employees make sense of the never-ending stream of data hitting them faster than ever. Our brains respond much better to visuals than rows on a spreadsheet.\nCompanies like Amazon, Apple, Facebook, Google, Twitter, Netflix and many others understand the cardinal need to visualize data. And this goes way beyond Excel charts, graphs or even pivot tables. Companies like Tableau Software have allowed non-technical users to create very interactive and imaginative ways to visually represent information.\n\nMetadata: FAQ \n\nMy thought process is being able to gather all the information and split it into chunks, but I don't want titles without their following paragraphs separated, and I also want as much info as possible (max 20K characters) before creating another chunk.\nI would also like to save these chunks and their meta data. Is there a function in LangChain to do this?\nI am open to hearing not to do this in LangChain for efficiency reasons."} +{"id": "000418", "text": "I am experimenting with a langchain chain by passing multiple arguments.\nHere is a scenario:\nTEMPLATE = \"\"\"Task: Generate Cypher statement to query a graph database.\nInstructions:\nUse only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\n\nYou are also provided with contexts to generate cypher queries. 
These contexts are the node ids from the Schema.\n{schema}\n\nSome examples of contexts:\n{context}\n\nThe question is:\n{question}\"\"\"\n\nprompt = PromptTemplate.from_template(template=TEMPLATE)\nchain = (\n {\n \"schema\": schema,\n \"context\": vector_retriever_chain | extract_relevant_docs, \n \"question\": RunnablePassthrough()\n }\n | prompt\n)\nchain.invoke(\"my question?\")\n\nIn this chain, I am getting some context from a vector retriever which I am passing to a function called extract_relevant_docs() that will parse the result and get format I want.\nThe tricky part here is the variable 'schema' which I also want to supply to design my prompt. How can I pass these variables during the chain.invoke().\nThank you"} +{"id": "000419", "text": "I'm working with LangChain's Chroma VectorStore, and I'm trying to filter documents based on a list of document names.\nI have a list of document names as follows:\nlst = ['doc1', 'doc2', 'doc3']\n\nI also have doc_name metadata in my VectorStore. Currently, I\u2019m using the following code to retrieve documents:\nbase_retriever = chroma_db.as_retriever(search_kwargs={'k': 10})\n\nHowever, I\u2019m not sure how to modify this code to filter documents based on my list of document names. Could anyone guide me on how to achieve this? Any help would be greatly appreciated!"} +{"id": "000420", "text": "I have a simple series of nodes that are chained together with conditional edges. The first two are shown here:\ndef email_router(state: TypedDict) -> None:\n node_results, state = get_emails(state)\n if node_results == \"New Email\":\n state[\"internal_state\"] = \"topics_router\"\n else:\n state[\"errors\"] = \"No New Mail\"\n state[\"internal_state\"] = \"return_final_status\"\n\ndef check_internal_state(state: TypedDict) -> str:\n logging.debug(f\"Internal State: {state[\"internal_state\"]}\")\n return state[\"internal_state\"]\n\nIn the email router, I am setting the state value \"internal_state\" to either \"topics_router\" or \"return_final_status\" which are two other nodes. The check_internal_state is there simply to enable the conditional edge which looks as follows:\nworkflow.add_conditional_edges(\n \"email_router\",\n check_internal_state,\n {\n \"topics_router\": \"topics_router\",\n \"return_final_status\": \"return_final_status\",\n },\n)\n\nThe logging debug statement in the check_internal_state keeps returning an empty string which means that I am not properly carrying the state information across and I cannot figure out what I am doing wrong.\nThe code is set in classes. the one function compiling the code and saving the output as:\nglobal mailman\nmailman = workflow.compile()\n\nI have another function, in the same python file, then invoking mailman and calling functions that all reside in another python file but that are imported. I only bring this up as maybe separating things into multiple functions is causing the issue? 
Here is the overall workflow:\nworkflow = StateGraph(GraphState)\nworkflow.add_node(\"email_router\", email_router)\nworkflow.add_node(\"topics_router\", topics_router)\nworkflow.add_node(\"status_router\", status_router)\nworkflow.add_node(\"actions_router\", actions_router)\nworkflow.add_node(\"return_final_status\", return_final_status_node)\n\nworkflow.set_entry_point(\"email_router\")\n\nworkflow.add_conditional_edges(\n \"email_router\",\n check_internal_state,\n {\n \"topics_router\": \"topics_router\",\n \"return_final_status\": \"return_final_status\",\n },\n)\n\nworkflow.add_conditional_edges(\n \"topics_router\",\n check_internal_state,\n {\n \"status_router\": \"status_router\",\n \"return_final_status\": \"return_final_status\",\n },\n)\n\nworkflow.add_conditional_edges(\n \"status_router\",\n check_internal_state,\n {\n \"actions_router\": \"actions_router\",\n \"return_final_status\": \"return_final_status\",\n },\n)\n\nworkflow.add_edge(\"actions_router\", \"return_final_status\")\nworkflow.add_edge(\"return_final_status\", END)\n\nDoes anybody know what I am doing wrong? Thank you!"} +{"id": "000421", "text": "I am struggeling with basic chaining and passing input parameters through RunnableSequences in LangChain v0.2.\nI have two chains: code_chain and test_chain.\n\n\nNow I want to chain them together.\n\nThis is my current code:\ncode_prompt = PromptTemplate.from_template(\"Write a very short {language} function that will {task}\");\ncode_chain = code_prompt | llm | {\"code\": StrOutputParser()};\n\ntest_prompt = PromptTemplate.from_template(\"Write a test for the following {language} code:\\n{code}\");\ntest_chain = test_prompt | llm | {\"test\": StrOutputParser()};\n\nchain = code_chain | test_chain;\n\nresult = chain.invoke({\n \"language\": \"python\",\n \"task\": \"reverse a string\",\n});\n\nBecause code_chain does not retain the language parameter as an output, it is missing in the test_chain:\nKeyError: \"Input to PromptTemplate is missing variables {'language'}. Expected: ['code', 'language'] Received: ['code']\"\nHow do I pass the language input of the first chain to the language input of the second?"} +{"id": "000422", "text": "I'm trying to create a vector DB which will be populated with embeddings of articles from my employer's blog.\nI've got a Milvus instance up and running and am able to follow the walkthrough on the Langchain website.\nBased on the walkthrough, my implementation so far looks something like this:\ndef parseWPDataFile(filename):\n # redacted for brevity\n return {\n 'meta': parsed_headers,\n 'body': doc_body.strip()\n }\n\nparsed_doc = parseWPDataFile('sample_data.txt')\ntext_splitter = RecursiveCharacterTextSplitter(is_separator_regex=True, separators=['\\n+'], chunk_size=5000, length_function=len)\ndocs = text_splitter.create_documents([parsed_doc['body']], [parsed_doc['meta']])\nembeddings = OpenAIEmbeddings()\nvector_db = Milvus.from_documents(docs, embeddings, connection_args={\"host\": \"127.0.0.1\", \"port\": \"19530\"})\n\nThis being my first time using a vector database, I'm a little confused by that last line. The documentation for Milvus.from_documents indicates that it creates a vectorstore from documents, I guess, in memory. What I want is a persistent vectorstore that I can load stuff into and then later, in a separate script, pull from. 
I can't find any Langchain examples of this.\nHow do I create a persistent VectorStore, add to it, and get a reference to it later, in another script?"} +{"id": "000423", "text": "Here's my code:\nimport pickle, os\nfrom langchain_openai.chat_models import ChatOpenAI\nfrom langchain.schema import (\n AIMessage,\n HumanMessage,\n SystemMessage\n)\n\ndef execute_prompt(text, history, jarvis_setup):\n print(f\"You said: {text}\")\n history.append(HumanMessage(content = text))\n response = jarvis_setup(history)\n history.append(AIMessage(content = response.content))\n with open('JarvisMemory.txt', 'wb') as file:\n pickle.dump(history, file)\n \n print(response.content)\n\ndef main():\n jarvis_setup = ChatOpenAI(openai_api_key=\"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\", model = \"gpt-3.5-turbo\", temperature = 0.7, max_tokens = 400)\n #history = [SystemMessage(content=\"You are a human-like virtual assistant named Jarvis.\", additional_kwargs={})]\n if os.path.exists(\"JarvisMemory.txt\"):\n with open(\"JarvisMemory.txt\", \"rb\") as file:\n history = pickle.load(file)\n else:\n with open(\"JarvisMemory.txt\", \"wb\") as file:\n history = [SystemMessage(content=\"You are a human-like virtual assistant named Jarvis. Answer all questions as shortly as possible, unless a longer, more detailed response is requested.\", additional_kwargs={})]\n pickle.dump(history, file)\n \n while True:\n print(\"\\n\")\n print(\"Enter prompt.\")\n text = input().lower()\n print(\"Prompt sent.\")\n \n if text:\n execute_prompt(text, history, jarvis_setup)\n \n else:\n print(\"No prompt given.\")\n continue\n \nif __name__ == \"__main__\":\n main()\n\nAnd I get this error:\nLangChainDeprecationWarning: The method BaseChatModel.__call__ was deprecated in langchain-core 0.1.7 and will be removed in 0.3.0. 
Use invoke instead.\nwarn_deprecated(\nTraceback (most recent call last):\nFile \"C:\\Users\\maste\\Documents\\Coding\\Python\\Jarvis\\JarvisTextInpuhjhjghyjvjt.py\", line 44, in \nmain()\nFile \"C:\\Users\\maste\\Documents\\Coding\\Python\\Jarvis\\JarvisTextInpuhjhjghyjvjt.py\", line 37, in main\nexecute_prompt(text, history, jarvis_setup)\nFile \"C:\\Users\\maste\\Documents\\Coding\\Python\\Jarvis\\JarvisTextInpuhjhjghyjvjt.py\", line 12, in execute_prompt\nresponse = jarvis_setup(history)\nFile \"C:\\Users\\maste\\AppData\\Roaming\\Python\\Python310\\site-packages\\langchain_core_api\\deprecation.py\", line 148, in warning_emitting_wrapper\nreturn wrapped(*args, **kwargs)\nFile \"C:\\Users\\maste\\AppData\\Roaming\\Python\\Python310\\site-packages\\langchain_core\\language_models\\chat_models.py\", line 847, in call\ngeneration = self.generate(\nFile \"C:\\Users\\maste\\AppData\\Roaming\\Python\\Python310\\site-packages\\langchain_core\\language_models\\chat_models.py\", line 456, in generate\nraise e\nFile \"C:\\Users\\maste\\AppData\\Roaming\\Python\\Python310\\site-packages\\langchain_core\\language_models\\chat_models.py\", line 446, in generate\nself._generate_with_cache(\nFile \"C:\\Users\\maste\\AppData\\Roaming\\Python\\Python310\\site-packages\\langchain_core\\language_models\\chat_models.py\", line 671, in _generate_with_cache\nresult = self._generate(\nFile \"C:\\Users\\maste\\AppData\\Roaming\\Python\\Python310\\site-packages\\langchain_openai\\chat_models\\base.py\", line 520, in _generate\nmessage_dicts, params = self._create_message_dicts(messages, stop)\nFile \"C:\\Users\\maste\\AppData\\Roaming\\Python\\Python310\\site-packages\\langchain_openai\\chat_models\\base.py\", line 533, in _create_message_dicts\nmessage_dicts = [_convert_message_to_dict(m) for m in messages]\nFile \"C:\\Users\\maste\\AppData\\Roaming\\Python\\Python310\\site-packages\\langchain_openai\\chat_models\\base.py\", line 533, in \nmessage_dicts = [_convert_message_to_dict(m) for m in messages]\nFile \"C:\\Users\\maste\\AppData\\Roaming\\Python\\Python310\\site-packages\\langchain_openai\\chat_models\\base.py\", line 182, in _convert_message_to_dict\nif (name := message.name or message.additional_kwargs.get(\"name\")) is not None:\nAttributeError: 'SystemMessage' object has no attribute 'name'\nI'm guessing I need to add \".invoke\" somewhere in the code based on some research I did on the issue, but I'm a beginner.\nI found this website showcasing a very similar error and how to fix it: https://wikidocs.net/235780\nYou can translate the page to English with Google Translate and the translations are sufficient to understand. It says to add \".invoke\" in the place you can see shown on the website. Not sure how to implement this into my code though. Also, this might not be the right solution.\nI also looked at the Langchain website and it also says to use \"invoke\" but I can't find examples of it being used in a full line of code."} +{"id": "000424", "text": "Kind of new to Langchain/Qdrant but I'm building a recommendation engine to recommend users based on the contents of their associated PDF files, and I need to process PDFs and store their chunks in a vector database (I'm using Qdrant) for establishing context for the RAG agent. 
I don't exactly understand if this error is pertaining to some sort of version requirement, since the only prior error I found had to do with Langchain versions before 0.1.x:\nFound this prior issue\nHowever that issue was closed, and downgrading to versions below 0.1.x given the current releases of langchain doesn't seem feasible given what most of my current environment has recent dependencies.\nI tried different versions of langchain and different versions all of the corresponding langchain third-party libraries. Currently, these are the important parts of my requirements file (I think):\nlangchain==0.2.1\nlangchain-community==0.2.1\nlangchain-core==0.2.1\nlangchain-experimental==0.0.59\nlangchain-openai==0.1.7\nlangchain-text-splitters==0.2.0\nlangcodes==3.4.0\nlangsmith==0.1.57\n\nopenai==1.28.1 \npython==3.12.3\n\nLooking for some sort of workaround, or a diagnosis as to what may package may be causing the problem. My current program output:\nTraceback (most recent call last):\n File \"/Users/danielperlov/dperlov/JobsMatch/backend/ml_model/resume_preprocessor/main.py\", line 28, in \n main()\n File \"/Users/danielperlov/dperlov/JobsMatch/backend/ml_model/resume_preprocessor/main.py\", line 17, in main\n processor = PDFResumeProcessor(openai_api_key)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/Users/danielperlov/dperlov/JobsMatch/backend/ml_model/resume_preprocessor/gpt_class.py\", line 16, in __init__\n self.model = ChatOpenAI(api_key=openai_api_key, temperature=0, model_name='gpt-3.5-turbo-16k-0613')\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/Users/danielperlov/dperlov/JobsMatch/backend/ml_model/resume_preprocessor/.venv/lib/python3.12/site-packages/pydantic/v1/main.py\", line 339, in __init__\n values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/Users/danielperlov/dperlov/JobsMatch/backend/ml_model/resume_preprocessor/.venv/lib/python3.12/site-packages/pydantic/v1/main.py\", line 1064, in validate_model\n value = field.get_default()\n ^^^^^^^^^^^^^^^^^^^\n File \"/Users/danielperlov/dperlov/JobsMatch/backend/ml_model/resume_preprocessor/.venv/lib/python3.12/site-packages/pydantic/v1/fields.py\", line 437, in get_default\n return smart_deepcopy(self.default) if self.default_factory is None else self.default_factory()\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"/Users/danielperlov/dperlov/JobsMatch/backend/ml_model/resume_preprocessor/.venv/lib/python3.12/site-packages/langchain_core/language_models/base.py\", line 72, in _get_verbosity\n return get_verbose()\n ^^^^^^^^^^^^^\n File \"/Users/danielperlov/dperlov/JobsMatch/backend/ml_model/resume_preprocessor/.venv/lib/python3.12/site-packages/langchain_core/globals.py\", line 72, in get_verbose\n old_verbose = langchain.verbose\n ^^^^^^^^^^^^^^^^^\nAttributeError: module 'langchain' has no attribute 'verbose'"} +{"id": "000425", "text": "I am trying to make a chatbot using the Langchain-Openai.I have never done this before. I created a brand new api key, which was never used before. 
I copied code from the official langchain-openai docs, and the following code:\nfrom langchain_core.prompts import PromptTemplate\nfrom langchain_openai import OpenAI\n\nOPENAI_API_KEY = 'sk-proj-...'\n\ntemplate = \"\"\"Question: {question}\n\nAnswer: Let's think step by step.\"\"\"\n\nprompt = PromptTemplate.from_template(template)\n\nllm = OpenAI(openai_api_key=\"sk-proj-...\")\n\nllm_chain = prompt | llm\n\nquestion = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n\nllm_chain.invoke(question)\n\nIt is giving this very long error:\nTraceback (most recent call last):\n\n\nFile \"C:\\Users\\Acer\\OneDrive\\Documents\\VS_Code\\Python\\ai\\Langchain-Openai.py\", line 25, in \n llm_chain.invoke(question)\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\runnables\\base.py\", line 2399, in invoke\n input = step.invoke(\n ^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\language_models\\llms.py\", line 276, in invoke\n self.generate_prompt(\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\language_models\\llms.py\", line 633, in generate_prompt\n return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\language_models\\llms.py\", line 803, in generate\n output = self._generate_helper(\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\language_models\\llms.py\", line 670, in _generate_helper\n raise e\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_core\\language_models\\llms.py\", line 657, in _generate_helper\n self._generate(\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\langchain_openai\\llms\\base.py\", line 350, in _generate\n response = self.client.create(prompt=_prompts, **params)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\openai\\_utils\\_utils.py\", line 277, in wrapper\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\openai\\resources\\completions.py\", line 528, in create\n return self._post(\n ^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\openai\\_base_client.py\", line 1240, in post\n return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\openai\\_base_client.py\", line 921, in request\n return self._request(\n ^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\openai\\_base_client.py\", line 1005, in _request\n return self._retry_request(\n ^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\openai\\_base_client.py\", line 1053, in _retry_request\n return self._request(\n ^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\openai\\_base_client.py\", line 1005, in _request\n return self._retry_request(\n ^^^^^^^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\openai\\_base_client.py\", line 1053, in _retry_request\n return self._request(\n ^^^^^^^^^^^^^^\n File \"C:\\Users\\Acer\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\openai\\_base_client.py\", line 1020, in _request\n raise self._make_status_error_from_response(err.response) from None\nopenai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}\n\nI even checked the openai api key usage website and it is not showing anything.\nAll of the code is from the Langchain-Openai docs.\nAm I doing something wrong?\nEDIT:\nAs @trazoM pointed out, the code works just fine but apparently I just needed to make a new key and link a credit card. Thanks @trazoM!"} +{"id": "000426", "text": "I do not understand why the below use of the PydanticOutputParser is erroring.\nThe docs do not seem correct - If I follow this exactly (i.e. use with_structured_output exclusively, without an output parser) then the output is a dict, not Pydantic class. So I thought I modified it consistently with so SO answers e.g. this\nfrom langchain.prompts import PromptTemplate\nfrom langchain_openai import ChatOpenAI\nfrom langchain.output_parsers import PydanticOutputParser\n\nfrom uuid import uuid4\nfrom pydantic import BaseModel, Field\n\nclass TestSummary(BaseModel):\n \"\"\"Represents a summary of the concept\"\"\"\n\n id: str = Field(default_factory=lambda: str(uuid4()), description=\"Unique identifier\")\n summary: str = Field(description=\"Succinct summary\")\n \nllm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0).with_structured_output(TestSummary)\nparser = PydanticOutputParser(pydantic_object=TestSummary)\nprompt = PromptTemplate(\n template=\"You are an AI summarizing long texts. 
TEXT: {stmt}\",\n input_variables=[\"stmt\"]\n)\nrunnable = prompt | llm | parser \nresult = runnable.invoke({\"stmt\": \"This is a really long piece of literature I'm too lazy to read\"})\n\nThe error is\nValidationError: 1 validation error for Generation\ntext\n str type expected (type=type_error.str)\n\nAs discussed, if I omit the output parser, I get a dict:\nrunnable = prompt | llm #| parser \nresult = runnable.invoke({\"stmt\": \"This is a really long piece of literature I'm too lazy to read\"})\ntype(result)\ndict"} +{"id": "000427", "text": "I have a document with three attributes: tags, location, and text.\nCurrently, I am indexing all of them using LangChain/pgvector/embeddings.\nI have satisfactory results, but I want to know if there is a better way since I want to find one or more documents with a specific tag and location, but the text can vary drastically while still meaning the same thing. I thought about using embeddings/vector databases for this reason.\nWould it also be a case of using RAG (Retrieval-Augmented Generation) to \"teach\" the LLM about some common abbreviations that it doesn't know?\nimport pandas as pd\n\nfrom langchain_core.documents import Document\nfrom langchain_postgres import PGVector\nfrom langchain_postgres.vectorstores import PGVector\nfrom langchain_openai.embeddings import OpenAIEmbeddings\n\nconnection = \"postgresql+psycopg://langchain:langchain@localhost:5432/langchain\"\nembeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")\ncollection_name = \"notas_v0\"\n\nvectorstore = PGVector(\n embeddings=embeddings,\n collection_name=collection_name,\n connection=connection,\n use_jsonb=True,\n)\n\n\n# START INDEX\n\n# df = pd.read_csv(\"notes.csv\")\n# df = df.dropna() # .head(10000)\n# df[\"tags\"] = df[\"tags\"].apply(\n# lambda x: [tag.strip() for tag in x.split(\",\") if tag.strip()]\n# )\n\n\n# long_texts = df[\"Texto Longo\"].tolist()\n# wc = df[\"Centro Trabalho Respons\u00e1vel\"].tolist()\n# notes = df[\"Nota\"].tolist()\n# tags = df[\"tags\"].tolist()\n\n# documents = list(\n# map(\n# lambda x: Document(\n# page_content=x[0], metadata={\"wc\": x[1], \"note\": x[2], \"tags\": x[3]}\n# ),\n# zip(long_texts, wc, notes, tags),\n# )\n# )\n\n# print(\n# [\n# vectorstore.add_documents(documents=documents[i : i + 100])\n# for i in range(0, len(documents), 100)\n# ]\n# )\n# print(\"Done.\")\n\n### END INDEX\n\n### BEGIN QUERY\n\nresult = vectorstore.similarity_search_with_relevance_scores(\n \"EVTD202301222707\",\n filter={\"note\": {\"$in\": [\"15310116\"]}, \"tags\": {\"$in\": [\"abcd\", \"xyz\"]}},\n k=10, # Limit of results\n)\n\n### END QUERY"} +{"id": "000428", "text": "So far my research only shows me how to filter to a specific a specific document or page but it doesn't show how to exclude some documents from the search.\nresults_with_scores = db.similarity_search_with_score(\"foo\", filter=dict(page=1))"} +{"id": "000429", "text": "I have this LangChain code for answering questions by getting similar docs from the vector store and using llm to get the answer of the query:\nllm_4 = AzureOpenAI(\n # temperature=0,\n api_version= os.environ['OPENAI_API_VERSION_4'], \n openai_api_key= os.environ['AZURE_OPENAI_API_KEY_4'], \n\n deployment_name=\"gpt4-deploy\",\n # model_name=\"gpt4-o\",\n azure_endpoint=os.environ['AZURE_OPENAI_ENDPOINT_4']\n )\n\n llm_3 = AzureOpenAI(\n # temperature=0,\n api_version= os.environ['OPENAI_API_VERSION_3'], \n openai_api_key= os.environ['AZURE_OPENAI_API_KEY_3'], \n\n deployment_name=\"test-deployment\", 
\n # deployment_name=\"gpt-16k-deployment\",\n # model_name=\"gpt-3.5-turbo-16k\",\n\n azure_endpoint=os.environ['AZURE_OPENAI_ENDPOINT_3']\n )\n\n response=get_answer(relavant_docs, user_input, llm_4)\n\n...\n#Create embeddings instance\ndef create_embeddings():\n #embeddings = OpenAIEmbeddings()\n embeddings = SentenceTransformerEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n # embeddings = SentenceTransformerEmbeddings(model_name=\"text-davinci-003\") \n return embeddings\n\n\ndef get_answer(docs, user_input, llm=None):\n if llm:\n chain = load_qa_chain(llm, chain_type=\"stuff\")\n else:\n chain = load_qa_chain(OpenAI(), chain_type=\"stuff\")\n\n with get_openai_callback() as cb:\n response = chain.run(input_documents=docs, question=user_input)\n return response\n\nIt's working with gpt3, but with gpt4 getting:\n\nBadRequestError: Error code: 400 - {'error': {'code': 'OperationNotSupported', 'message': 'The completion operation does not work with the specified model, gpt-4. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.'}}\n\nI tried what was suggested by these similar issues:\nHow to use the new gpt-3.5-16k model with langchain?\nI am trying to make a docs question answering program with AzureOpenAI and Langchain\nBut I still didn't figure out how to solve it!"} +{"id": "000430", "text": "I'm working on integrating LangChain with AzureOpenAI in Python and encountering a couple of issues. I've recently updated from a deprecated method to a new class implementation, but now I'm stuck with some errors I don't fully understand. Here's the relevant part of my code:\nfrom langchain_openai import AzureOpenAI as LCAzureOpenAI\n# from langchain.llms import AzureOpenAI <-- Deprecated\n\n# Create client accessing LangChain's class\nclient = LCAzureOpenAI(\n openai_api_version=api_version,\n azure_deployment=deployment_name,\n azure_endpoint=azure_endpoint,\n temperature=TEMPERATURE,\n max_tokens=MAX_TOKENS,\n model=model\n #,model_kwargs={'azure_openai_api_key': api_key}\n)\n\n# Attempt to send a chat message\nclient.chat(\"Hi\")\n\nThis results in the following error:\nAttributeError: 'AzureOpenAI' object has no attribute 'chat'\n\nWhen I replace client.chat(\"Hi\") with client.invoke(\"Hi\"), I get a different error:\nBadRequestError: Error code: 400 - {'error': {'code': 'OperationNotSupported', 'message': 'The completion operation does not work with the specified model, gpt-4. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.'}}\n\nHow can I resolve these errors?\nAny guidance or insights into these errors and how to resolve them would be greatly appreciated!"} +{"id": "000431", "text": "I make some gate like this:\nGate::define('update-post', function ($user, Post $post) {\n return $user->hasAccess(['update-post']) or $user->id == $post->user_id;\n});\n\nI checked my database and it has update-post access and the user id is same as in the post. but I got:\n\nThis action is unauthorized.\n\nerrors. so am I do some mistake here? 
thanks."} +{"id": "000432", "text": "I'm trying to store image in 3 different folder inside public folder now I'm able to store in two different folder but when I added 3rd folder path but it was not coping in 3rd folder help me to solve this.\nmy two folder copying Working code\n$input['student_photo'] = time().'.'.$request->student_photo->getClientOriginalExtension();\n $folder1 = public_path('public/path1/');\n $path1 = $folder1 . $input['student_photo']; // path 1\n $request->student_photo->move($folder1, $input['student_photo']); // image saved in first folder\n $path2 = public_path('public/path2/') . $input['student_photo']; // path 2\n \\File::copy($path1, $path2);\n\nI tired this code for copy 3rd folder but not working\n$input['student_photo'] = time().'.'.$request->student_photo->getClientOriginalExtension();\n $folder1 = public_path('public/path1/');\n $path1 = $folder1 . $input['student_photo']; // path 1\n $request->student_photo->move($folder1, $input['student_photo']); // image saved in first folder\n $path2 = public_path('public/path2/') . $input['student_photo']; // path 2\n $path3 = public_path('public/path3/') . $input['student_photo']; // path 3\n \\File::copy($path1, $path2, $path3);"} +{"id": "000433", "text": "I am trying to write an installer-type app using Laravel 10. Where the purpose is: if I don't set the database information it will redirect me to the setup database. When I fill up the form and submit, it performs the following task:\n\nUpdate database-related content in the .env file\nClear cache, config, and finally cache new config using the Artisan call.\nFinally migrate the database still using the Artisan call.\n\nAlthough the .env file content is updated, it still uses the last database-related info that existed before the update. I mean, suppose previously DB name was ab_setup, then I update it to nn_setup, .env file shows the DB name as nn_setup, but the browser responds:\n\nUnknown database ab_setup (Connection: mysql, SQL: select * from information_schema.tables where table_schema = nn_setup and table_name = migrations and table_type = 'BASE TABLE').\n\nI have no idea what's actually wrong. 
Here is my code:\n// Update .env file content\n$envContent = [\n 'DB_CONNECTION' => $request->database_connection,\n 'DB_HOST' => $request->database_host,\n 'DB_PORT' => $request->database_port,\n 'DB_DATABASE' => $request->database_name,\n 'DB_USERNAME' => $request->database_username,\n 'DB_PASSWORD' => $request->database_password\n];\n\nforeach( $envContent as $key => $value ) {\n $this->replace_env_value($key, $value);\n}\n\n// Clear Cache and Config & Cache new Config\nArtisan::call('cache:clear');\nArtisan::call('config:clear');\nArtisan::call('config:cache');\n\n// Migrate DB and Seed\nArtisan::call('migrate');\nArtisan::call('db:seed');\n\n// Create Admin User\n$user = new User();\n$user->name = $request->admin_name;\n$user->email = $request->admin_username;\n$user->password = Hash::make($request->admin_password);\n$user->save();\n\nDoes anyone have the idea what I miss that will fix the issue?"} +{"id": "000434", "text": "I'm trying to allow a user to Unsubscribe from a email link via the server side.\nSimply want them to click the link, add a column to my table and show them a page that tells them they have been unsubscribed.\nThe link looks like this:\nhttps://mysite.test/unsubscribe-confirm/eyJpdiI6Ims3Y3RJdHhvWTgyNGVLais1UXlzdlE9PSIsInZhbHVlIjoiUUF0NFZUUkNqTklmTTExTVRmZEdyaEUvN1kzcXpLNmNhRit3enpPNDE0ak5xalRaa1JQQS91elBCazIzTWFxayIsIm1hYyI6IjMzZDM4MDUwNTYzZjY1ZjQ4OTEyNDI3ZjJhY2M4NzgxNzgwMDFiMGQ2NjQ0ZjFiMGFjNGJlNjg3YzY0Zjc5NTkiLCJ0YWciOiIifQ==/1\n\nIn my job JobToSendEmails I'm creating the $unsubscribeLink like this:\n// Generate token\n$unsubscribeToken = encrypt($sponsor->email);\n// Generate unsubscribe link\n$unsubscribeLink = route('unsubscribe.confirm', ['token' => $unsubscribeToken, 'campaign_id' => $emailCampaign->id]);\n\nAnd then passing to my email like this:\nMail::to($sponsor->email)->send(new EmailToUsers(\n...\n$unsubscribeLink\n));\n\nThen in the EmailToUsers view I have this line:\nIf you no longer wish to receive these emails, you can [unsubscribe]({{ $unsubscribeLink }}) at any time.\n\nThe link goes to this route:\nRoute::post('/unsubscribe-confirm/{token}/{campaign_id}', [EmailPreferenceController::class, 'confirmUnsubscribe'])->name('unsubscribe.confirm');\n\nThen in the EmailPreferenceController the confirmUnsubscribe method looks like this:\n public function confirmUnsubscribe()\n {\n $token = Request::get('token');\n $user = User::findByToken($token);\n if ($user) {\n $emailCampaignId = Request::get('email_campaign_id');\n $emailCampaignPreference = EmailCampaignPreference::updateOrCreate(\n [\n 'user_id' => $user->id,\n 'email_campaign_id' => $emailCampaignId,\n ],\n [\n 'opt_out' => true,\n 'email' => $user->email,\n ]\n );\n\n return redirect()->route('unsubscribe');\n }\n Log::error('User not found by token: ' . 
$token);\n\n return response()->view('errors.404', [], 404);\n }\n\nfindByToken looks like this:\npublic static function findByToken($token): User\n {\n $email = decrypt($token);\n return static::where('email', $email)->first();\n }\n\nWhen I click the unsubscribe link in the email, I get the error:\n\nSymfony\u2009\\Component\u2009\\HttpKernel\u2009\\Exception\u2009\\MethodNotAllowedHttpException\n\"The GET method is not supported for route\nunsubscribe-confirm/eyJpdiI6Ims3Y3RJdHhvWTgyNGVLais1UXlzdlE9PSIsInZhbHVlIjoiUUF0NFZUUkNqTklmTTExTVRmZEdyaEUvN1kzcXpLNmNhRit3enpPNDE0ak5xalRaa1JQQS91elBCazIzTWFxayIsIm1hYyI6IjMzZDM4MDUwNTYzZjY1ZjQ4OTEyNDI3ZjJhY2M4NzgxNzgwMDFiMGQ2NjQ0ZjFiMGFjNGJlNjg3YzY0Zjc5NTkiLCJ0YWciOiIifQ==/1.\nSupported methods: POST.\"\n\nWhat can I do to fix this? I'm using a Post method on the route. I would like to keep this format in the email, [unsubscribe]({{ $unsubscribeLink }})\nAny help would be appreciated."} +{"id": "000435", "text": "I want to remove old images in my public folder when he want to change him profile photo.\nProfileController:\nif($request->hasFile('image')){\n $request->validate([\n 'image' => 'image|mimes:jpeg,png,jpg,svg|max:2048'\n ]);\n \n $imageName = $request->user()->id.'-'.time().'.'.$request->image->extension();\n $request->image->move(public_path('users'), $imageName);\n $path = \"users/\".$imageName;\n $request->user()->image = $path;\n $request->user()->save();\n}\n\ni tried somethings but i cant do it.\nThanks for your replys."} +{"id": "000436", "text": "I am passing a single $order record through to a Blade template, and finding that I cannot access, for example, any order customer attributes, despite them having been included in my query using with() and displaying correctly in a blade dump.\nI can't seem to put my finger on what I'm doing wrong here to prevent me from accessing $order->customer->first_name, for example. 
Any pointers would be hugely appreciated.\nThe blade dump of {{ $order }} appears like so:\n{\n \"id\": 10,\n \"customer\": {\n \"id\": 20,\n \"first_name\": \"John\",\n \"last_name\": \"Doe\"\n },\n \"order_date\": \"2023-01-10 10:31:51\",\n \"created_at\": \"2023-02-11T10:37:05.000000Z\",\n ...\n}\n\nBut any attempt to access $order->customer->first_name results in this 500 response:\nAttempt to read property \"first_name\" on int\n$order->customer returns the customer id, which makes sense, but is contrary to what the blade dump would suggest should be the case.\nA dd() on the $order variable before sending it to the blade template gives me this output (shortened)\nApp\\Models\\Order {#1463 \u25bc // app/Http/Controllers/OrderController.php:139\n ...\n #attributes: array:25 [\u25bc\n \"id\" => 10\n \"customer\" => 20\n \"rfp\" => 69\n \"order_date\" => \"2023-01-10 10:31:51\"\n \"created_at\" => \"2023-02-11 23:37:05\"\n ...\n ]\n #original: array:25 [\u25b6]\n #changes: []\n #casts: array:1 [\u25b6]\n #classCastCache: []\n #attributeCastCache: []\n #dateFormat: null\n #appends: []\n #dispatchesEvents: []\n #observables: []\n #relations: array:4 [\u25bc\n \"rfp\" => App\\Models\\User {#1478 \u25b6}\n \"customer\" => App\\Models\\Customer {#1480 \u25bc\n #connection: \"mysql\"\n #table: \"customers\"\n #primaryKey: \"id\"\n #keyType: \"int\"\n +incrementing: true\n #with: []\n #withCount: []\n +preventsLazyLoading: false\n #perPage: 15\n +exists: true\n +wasRecentlyCreated: false\n #escapeWhenCastingToString: false\n #attributes: array:3 [\u25bc\n \"id\" => 20\n \"first_name\" => \"John\"\n \"last_name\" => \"Doe\"\n ]\n #original: array:3 [\u25b6]\n #changes: []\n #casts: array:1 [\u25b6]\n #classCastCache: []\n #attributeCastCache: []\n #dateFormat: null\n #appends: []\n #dispatchesEvents: []\n #observables: []\n #relations: []\n #touches: []\n +timestamps: true\n +usesUniqueIds: false\n #hidden: []\n #visible: []\n #fillable: array:17 [\u25b6]\n #guarded: array:1 [\u25b6]\n #searchable: array:9 [\u25b6]\n #forceDeleting: false\n }\n ...\n ]\n ...\n}\n\nWeb.php\nRoute::get('/orderpdf/{id}', [OrderController::class, 'generateOrderPDF']);\n\nOrderController.php\npublic static function generateOrderPDF($id, Request $request) {\n\n $select = [\n 'customer' => function($query) {\n $query->select('id', 'first_name', 'last_name');\n }];\n \n $order = Order::with($select)->find($id);\n \n if (!$order) {\n return response()->json(['message' => 'Order not found.'], 404);\n }\n\n $pdfHtml = View::make('order-confirmation-pdf', ['order' => $order])->render();\n\n if ($request->input('preview')) {\n return $pdfHtml;\n }\n \n ... \n\nOrder.php (Model)\npublic function customer()\n {\n return $this->belongsTo(Customer::class, 'customer');\n }\n\n\nCustomer.php (Model)\npublic function orders()\n {\n return $this->hasMany(Order::class);\n }"} +{"id": "000437", "text": "I made this route:\nRoute::resource('questionnaire_correction/{id}', QuestionnaireCorrectionController::class)->only(['create', 'store']);\n\nUsing this route, I get Route [questionnaire_correction.create] not defined..\nChecking the routes with php artisan r:l --path=quesionnaire I get:\nPOST questionnaire_correction/{questionnaireid} .................... {questionnaireid}.store \u203a QuestionnaireCorrectionController@store\nGET|HEAD questionnaire_correction/{questionnaireid}/create ............. {questionnaireid}.create \u203a QuestionnaireCorrectionController@create\n\nSomehow, it is creating the route name {questionnaireid}.store. 
I am using Laravel 10.\nCreating the routes like this works:\nRoute::get('/questionnaire_correction/{questionnaireid}', [QuestionnaireCorrectionController::class, 'create'])->name('questionnaire_correction.create');\nRoute::post('/questionnaire_correction/{questionnaireid}', [QuestionnaireCorrectionController::class, 'store'])->name('questionnaire_correction.store');"} +{"id": "000438", "text": "after updating to laravel 10, i cant perform raw query like this:\n$statement = 'SELECT';\n foreach ($tables = collect(availableTables()) as $name => $table_name) {\n if ($tables->last() == $table_name) {\n $statement .= \"( SELECT COUNT(*) FROM $table_name) as {$table_name}\";\n }\n else {\n $statement .= \"( SELECT COUNT(*) FROM $table_name) as {$table_name}, \";\n }\n }\n $query = DB::select(DB::raw($statement));\n\nthis returns me the following error:\nPDO::prepare (): Argument #1 ($query) must be of type string, Illuminate\\Database|Query\\ Expression given\n\nwhat should i do to fix this issue"} +{"id": "000439", "text": "i was storing form data and got the error, \"SQLSTATE[42S02]: Base table or view not found: 1146 Table 'crm.email' doesn't exist (Connection: mysql, SQL: select count(*) as aggregate from email where email = TEST@gmail.com)\"\nI send the form data to server, this is my code\n\n
[Blade form markup trimmed to its surviving content: a form with @csrf and fields labelled Name, Email, Phonenumber, Address, Zip Code, Country, Card Holder, Card Number, Expire Date, CVV and Remarks; the name, email, phonenumber, address, zipcode, country and remarks fields are each followed by an @error('...') block that prints {{$message}}]
\n \n\nand ther server code\n\npublic function CustomerDataStore(Request $req)\n {\n $validatdada = $req->validate([\n 'name'=> 'required',\n 'email'=> 'required|unique:email',\n 'phonenumber'=> 'required',\n 'address'=> 'required',\n 'zipcode'=> 'required',\n 'country'=> 'required',\n 'remarks'=> 'required'\n ]);\n\n\n return view('CustomerEntry.successful');\n }\n\n\ni created table name \"customerdata\" but when i try to store data through the controller and model(\"customerdata\"), it gave me the error and continuously giving me the error even i deleted the table name \"customerdata\" and its model and also rollback the migration.\nanyone know why i'm getting the error even i am not using the database just sending the data?"} +{"id": "000440", "text": "I have these models:\n\nOffer\nOfferReport\nVendor1Report\nVendor2Report\nVendor3Report\n\nOffer table definition:\nSchema::create('offers', function (Blueprint $table) {\n $table->id();\n $table->string('slug')->unique();\n $table->string('name')->nullable()->default(null);\n $table->timestamps();\n});\n\nOfferReport table definition:\nSchema::create('offer_reports', function (Blueprint $table) {\n $table->id();\n $table->foreignIdFor(Offer::class)->constrained();\n $table->unsignedInteger('visitors_count')->default(0);\n $table->unsignedInteger('customers_count')->default(0);\n $table->unsignedInteger('sales_count')->default(0);\n $table->unsignedInteger('sales_amount')->default(0);\n $table->timestamp('starts_at')->nullable();\n $table->timestamp('ends_at')->nullable();\n $table->timestamps();\n $table->unique(['offer_id', 'starts_at', 'ends_at'], 'offer_report_range_unique');\n});\n\nEach of the Vendor*Report tables have the following general structure which varies depending the vendor:\nSchema::create('vendor1_reports', function (Blueprint $table) {\n $table->id();\n $table->foreignIdFor(Offer::class)->constrained();\n\n // Column names are variable depending on vendor, but have some correlate on the OfferReport model.\n\n $table->timestamp('starts_at')->nullable();\n $table->timestamp('ends_at')->nullable();\n $table->timestamps();\n $table->unique(['offer_id', 'starts_at', 'ends_at'], 'offer_report_range_unique');\n});\n\nThis is the OfferReportSource pivot:\nclass OfferReportSource extends MorphPivot\n{\n use HasFactory;\n\n protected $table = 'offer_report_sources';\n\n public function getMorphClass(): string\n {\n return 'offer_report_sources';\n }\n\n public function offerReport(): BelongsTo\n {\n return $this->belongsTo(OfferReport::class);\n }\n\n public function source(): MorphTo\n {\n return $this->morphTo();\n }\n}\n\nThis is the migration for that pivot:\nSchema::create('offer_report_sources', function (Blueprint $table) {\n $table->id();\n $table->foreignIdFor(OfferReport::class)->constrained();\n $table->morphs('source'); // Vendor1Report, Vendor2Report, etc.\n $table->timestamps();\n});\n\nI tried creating this relationship on the OfferReport model:\npublic function sources(): MorphToMany\n{\n return $this->morphToMany(\n OfferReportSource::class,\n 'source',\n 'offer_report_sources',\n 'offer_report_id',\n 'source_id'\n )->using(OfferReportSource::class);\n}\n\nWhen I try to aggregate, I am using this query to check if a particular vendor report is already associated with the combined OfferReport for the particular date range:\nOfferReport::where('offer_id', $vendorReport->offer_id)\n ->where('starts_at', '>=', $vendorReport->starts_at->startOfDay())\n ->where('ends_at', '<=', $vendorReport->ends_at->endOfDay())\n 
->whereHas('sources', function (Builder $query) use ($vendorReport) {\n $query->where('source_type', $vendorReport->getMorphClass())\n ->where('source_id', $vendorReport->id);\n })\n ->firstOrNew();\n\nThis always causes the following error:\nSQLSTATE[42000]: Syntax error or access violation: 1066 Not unique table/alias: 'offer_report_sources' (Connection: mysql, SQL: select * from `offer_reports` where `offer_id` = 31 and `starts_at` >= 2023-03-31 00:00:00 and `ends_at` <= 2023-03-31 23:59:59 and exists (select * from `offer_report_sources` inner join `offer_report_sources` on `offer_report_sources`.`id` = `offer_report_sources`.`source_id` where `offer_reports`.`id` = `offer_report_sources`.`offer_report_id` and `offer_report_sources`.`source_type` = offer_reports and `source_type` = vendor1_reports and `source_id` = 1) limit 1)\n\nIf this is a new record, I compile all the data in the specific way to each vendor, save the entry, and then I try to attach the vendor report to the new OfferReport:\n$offerReport->sources()->attach($vendorReport);\n\nIf I try to do the attachment above (assuming I just skipped the broken whereHas part of the firstOrNew check then I get this error:\nSQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (`webseeds`.`offer_report_sources`, CONSTRAINT `offer_report_sources_offer_report_id_foreign` FOREIGN KEY (`offer_report_id`) REFERENCES `offer_reports` (`id`)) (Connection: mysql, SQL: insert into `offer_report_sources` (`offer_report_id`, `source_id`, `source_type`) values (2, 10, offer_report_sources))\n\nObviously offer_report_sources is showing up in the wrong places according to the query errors, but I can't seem to figure out how to structure my polymorphic pivot relationship method source() to handle these variable table names that could be referenced as part of the polymorph."} +{"id": "000441", "text": "When I start the run docker and meilisearch container doesn't run with this error :\n\n2023-04-10 17:23:44 Error: Your database version (1.0.2) is incompatible with your current engine version (1.1.0).\n2023-04-10 17:23:44 To migrate data between Meilisearch versions, please follow our guide on https://docs.meilisearch.com/learn/update_and_migration/updating.html\n\nWhen I run my project yesterday everything was good but today I can't start the run meilisearch."} +{"id": "000442", "text": "I'm trying to show data from a database in a view where the view isn't showing a thing, except the HTML code for it. This is an piece of the code I'm trying to make it work.\n
Nombre del Indicador: {{ $indicadorFinanciero->nombreIndicador }}\n\nI get no error from Laravel saying that something is wrong, but clearly something is messed up with the variables or the way I'm calling them, because when I tweak them and try to change them, I get errors from Laravel.\nWhat is weird for me is that I already managed to call the data from the database and show it in an index view, which I did using a foreach loop.\nWhat I'm trying to do is the same as in my index view, but only show it (or read it, given that it is a CRUD I'm making). The code for the index is:\n@extends('indicadoresfinancieros.layout')\n\n@section('content')\n\n
[index view markup trimmed to its surviving content: an 'Indicadores Financieros' heading, a success alert that prints {{ $message }} when Session::get('success') is set, a stray header row printing {{ 'Order' }}, {{ 'Expected On' }} and {{ 'Status' }}, and a table with columns ID, Nombre del Indicador, Codigo del Indicador, Unidad de medida del Indicador, Valor del Indicador, Fecha de Registro and Accion that loops @foreach ($indicadoresfinancieros as $indicadorfinanciero) printing each record's id, nombreIndicador, codigoIndicador, unidadMedidaIndicador, valorIndicador and fechaRegistro, with Mostrar and Editar links plus a delete form using @csrf and @method('DELETE'), followed by {{ $indicadoresfinancieros->links('pagination::bootstrap-5') }}]
\n\n@endsection\n\nAs we see I'm a going through with the foreach, but in this case the variables are different, because I took from my IndicadorFinancieroController where I saved some data in the $indicadoresfinancieros variable.\nPaginate(5);\n\n return view('indicadoresfinancieros.index', compact('indicadoresfinancieros'))->with(request()->input('page'));\n }\n\n /**\n * Show the form for creating a new resource.\n */\n public function create()\n {\n return view('indicadoresfinancieros.create');\n\n //\n }\n\n /**\n * Store a newly created resource in storage.\n */\n public function store(Request $request)\n {\n //validar\n $request->validate([\n 'nombreIndicador'=>'required',\n 'codigoIndicador'=>'required',\n 'unidadMedidaIndicador'=>'required',\n 'valorIndicador'=>'required',\n 'fechaIndicador'=>'required'\n ]);\n\n //crear entrada\n IndicadorFinanciero::create($request->all());\n\n //redirigir\n return redirect()->route('indicadoresfinancieros.index')->with('success', 'Entrada creada exitosamente');\n\n }\n\n /**\n * Display the specified resource.\n */\n public function show(IndicadorFinanciero $indicadorFinanciero)\n {\n return view('indicadoresfinancieros.show', compact('indicadorFinanciero'));\n }\n\n /**\n * Show the form for editing the specified resource.\n */\n public function edit(IndicadorFinanciero $indicadorFinanciero)\n {\n //\n }\n\n /**\n * Update the specified resource in storage.\n */\n public function update(Request $request, IndicadorFinanciero $indicadorFinanciero)\n {\n //\n }\n\n /**\n * Remove the specified resource from storage.\n */\n public function destroy(IndicadorFinanciero $indicadorFinanciero)\n {\n //\n }\n}"} +{"id": "000443", "text": "I was early into working on a Laravel 10/Inertia 1/Vue 3 SSR app, and I noticed a Hydration node mismatch error show up out of nowhere. After a fair bit of removing things, I was left with the following:\nweb.php:\nRoute::get('/', function () {\n return Inertia::render('Welcome');\n});\n\nLayout.vue:\n\n\nWelcome.vue:\n\n\nRemoving the layout from app.js fixes it, but I don't really see how. Additionally, it seems that adding the layout by just using it as a component, works fine."} +{"id": "000444", "text": "I want to connect my terminal to sail for running commands and edit files in sail containers.\nI found that I can execute php artisan commands like:\nsail artisan migrate replacing php to sail.\nBut I want to get into: /var/www/project in container to execute php artisan migrate\nWhat aliases exists for this?"} +{"id": "000445", "text": "I have copied the Laravel password resetting up to the 3third part OF the code it's all good on the 4th part it won't work that after clicking reset it should redirect me to the login page.\nFirst route the view\nSecond the View submit\nThird The view of the reset link sent to the email.\nFourth Is the submit of the reset link is not working as intended\nafter clicking submit nothing happened and password didn't change at all\nHere are the routes\nRoute::get('/forgot-password', function () {\n return view('auth.forgot-password');\n})->middleware('guest')->name('password.request');\n\nRoute::post('/forgot-password', function (Request $request) {\n $request->validate(['email' => 'required|email']);\n\n $status = Password::sendResetLink(\n $request->only('email')\n );\n\n return $status === Password::RESET_LINK_SENT\n ? 
back()->with(['status' => __($status)])\n : back()->withErrors(['email' => __($status)]);\n})->middleware('guest')->name('password.email');\n\nRoute::get('/reset-password/{token}', function (string $token) {\n return view('auth.reset-password', ['token' => $token]);\n})->middleware('guest')->name('password.reset');\n\n\nRoute::post('/reset-password', function (Request $request) {\n $request->validate([\n 'token' => 'required',\n 'email' => 'required|email',\n 'password' => 'required|min:8|confirmed',\n ]);\n\n $status = Password::reset(\n $request->only('email', 'password', 'password_confirmation', 'token'),\n function (User $user, string $password) {\n $user->forceFill([\n 'password' => Hash::make($password)\n ])->setRememberToken(Str::random(60));\n\n $user->save();\n\n event(new PasswordReset($user));\n }\n );\n\n return $status === Password::PASSWORD_RESET\n ? redirect()->route('login')->with('status', __($status))\n : back()->withErrors(['email' => [__($status)]]);\n})->middleware('guest')->name('password.update');\n\nThe view\n\n
[reset-password view markup trimmed to its surviving content: a form with @csrf, a block that prints {{ Session::get('message') }} when it is set, a 'RESET PASSWORD' heading, and email, password and password confirmation inputs, each followed by an @error('...') block that prints {{ $message }}]
"} +{"id": "000446", "text": "Please find below the screenshots for the reference;\n\nOn the above Screenshot 1 displays the users dropdown to search the specific user and their repective orders.\n\nOn the above Screenshot 2 when the particular user is selected then the respective orders belongs to the user is displayed which is fine but the problem is that it is displaying id of the User but not the name when no orders is present for the respective user.\nreport_by_user.blade.php\n@extends('admin.admin_dashboard')\n@section('admin')\n\n
[report_by_user view markup trimmed to its surviving content: an 'Ecommerce Report' heading and a 'Search By User' form with @csrf containing the users dropdown]
\n\n\n\n\n@endsection\n\nreport_by_user_show.blade.php\n @extends('admin.admin_dashboard')\n @section('admin')\n \n
[report_by_user_show view markup trimmed to its surviving content: an 'All Order By User Report' heading, a 'Search By User Name : {{ $userName }}' line, and a table with columns Sl, Date, Invoice, Amount, Payment, State and Action that loops @foreach($orders as $key => $item) printing {{ $key+1 }}, {{ $item->order_date }}, {{ $item->invoice_no }}, AED {{ $item->amount }}, {{ $item->payment_method }} and {{ $item->status }}, along with 'Details' and 'Invoice Pdf' action links]
\n\n@endsection\n\nReportController.php\nclass ReportController extends Controller {\n \n public function OrderByUser(){\n $users = User::where('role','user')->latest()->get();\n return view('backend.report.report_by_user',compact('users'));\n\n}// End Method \n\npublic function SearchByUser(Request $request){\n $users = $request->user;\n $orders = Order::where('user_id',$users)->latest()->get();\n return view('backend.report.report_by_user_show',compact('orders','users'));\n }// End Method \n}\n\nOrder.php\nclass Order extends Model\n{\n use HasFactory;\n protected $guarded = [];\n \n public function user(){\n return $this->belongsTo(User::class,'user_id','id');\n }\n} \n\nAny suggestions are most welcome.\nThank you in advance."} +{"id": "000447", "text": "I'm kinda new to Laravel and doing a project using the argon dashboard as a base (https://github.com/creativetimofficial/argon-dashboard-laravel) in Laravel 10.8.0. I'm having a problem where routes with multiples parameters break the HTML in the view. For example:\nRoute::get('/alertas', [AlertaController::class, 'show'])->name('alertas');\nRoute::get('/alertas/historial', [AlertaController::class, 'historial'])->name('alertas.historial');\n\nThe first route works fine but the second shows only the text with no formatting.\nIn both cases the returned view is the same, just with a different title:\n@extends('layouts.basic', ['title' => 'Historial Alertas', 'header' => true])\n@section('page')\n
[page view markup trimmed to its surviving content: it prints 'Hola Mundo', a @can('see-some') block printing 'Esto lo ven los consultores' and a @can('see-all') block printing 'Esto lo ven los admins']\n@endsection\n\nWhere the basic template is:\n@extends('layouts.app', ['type' => 'basic'])\n\n@section('content')\n @include('layouts.navbars.auth.topnav')\n [layout markup trimmed to its surviving content: @if ($header ?? false) a header that prints {{ $title }} @endif, then @yield('page'), then @include('layouts.footers.auth.footer')]\n@endsection\n\nand the app template is:\n[full HTML page trimmed to its surviving content: a head titled 'Argon Dashboard 2 by Creative Tim' with a {{-- misc CSS --}} marker, and a body that uses ($type ?? '') == 'basic', @if ($showside ?? true), @guest/@auth and in_array(request()->route()->getName(), [...]) checks (routes such as 'login', 'register', 'profile', 'virtual-reality') to decide when to @include('layouts.navbars.auth.sidenav'), @yield('content') and @include('components.fixed-plugin'), ending with @stack('js')]\n\ni believe the relevant errors are:\nLoading failed for the \n
\n\nUser.Php\nnamespace App\\Livewire;\n\nuse Livewire\\Component;\n\nclass User extends Component\n{\n\n public $name = 'ali';\n\n\n public function render()\n {\n\n return view('livewire.user');\n }\n}\n\nOn page load Output shows \"ali\" in the input field, but when I type more characters in input fields its not updating variable value."} +{"id": "000484", "text": "I have been using laravel framework version 10 for in-house web application development, which will run on intranet not internet.\nI downloaded and installed xampp having php version 8.1.17, i downloaded and install composer with selecting php.exe of php version 8.1.17 installed at \"C:/xampp/php\"\nafter that i have downloaded laravel 10 project using composer at path \"C:/xampp/htdocs/project\".\nI have valid ssl certificate for my intranet domain for example abc.example.com, now at \"C:/xampp/apache/extra/httpd-vhosts.conf\", i have placed following code to configure ssl certificate\n\n DocumentRoot \"C:/xampp/htdocs/project\"\n ServerName abc.example.com:444\n SSLEngine on\n SSLCertificateFile \"conf/ssl.crt/22112022.cer\"\n SSLCertificateKeyFile \"conf/ssl.crt/22112022-rsa.key\"\n SSLProtocol all -SSLv3\n \n Options All\n AllowOverride All\n Require all granted\n \n\n\nNow i am running my laravel project with command \"php artisan serve --host=abc.example.com --port=444\"\nwhen i am accessing my application using url \"https://abc.example.com:444/\", it is showing directory listing of my project at \"C:/xampp/htdocs/project\", Please check image below :\n\nMy laravel project is not executing, instead directory listing is displaying, ssl is resolving as expected. can anyone guide me with this ? i am new to laravel but not xampp."} +{"id": "000485", "text": "I am using Laravel Framework 10.15.0.\nI am trying to load my API-Keys the following way:\n $apiKeyOpenAI = env('OPENAI_API_KEY');\n $client = OpenAI::client($apiKeyOpenAI);\n\nIn my .env file the api key is clearly defined:\nOPENAI_API_KEY=xx-xxxxxxxxxxxxxxxxxxxxxxx\nHowever, when executing my application on the server I get that the $apiKeyOpenAI is null.\nStill my .env file has the OPENAI_API_KEY in it. 
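Worth noting for the null key above: once php artisan config:cache has been run, env() returns null everywhere outside the config/ files, which matches the symptom of the value only reappearing after config:clear. A minimal sketch of the usual pattern, assuming a config/services.php entry (the 'openai' key name is only illustrative):

// config/services.php (illustrative entry; the 'openai' key name is an assumption)
'openai' => [
    'api_key' => env('OPENAI_API_KEY'),
],

// application code: read the cached config value instead of calling env() directly
$apiKeyOpenAI = config('services.openai.api_key');
$client = OpenAI::client($apiKeyOpenAI);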
I checked this!\nI tried to clear my cache php artisan config:clear , I still get the error:\n\n TypeError \n\n OpenAI::client(): Argument #1 ($apiKey) must be of type string, null given, called in /var/www/demo-website/app/Console/Commands/AdminCommand.php on line 151\n\n at vendor/openai-php/client/src/OpenAI.php:13\n 9\u2595 {\n 10\u2595 /**\n 11\u2595 * Creates a new Open AI Client with the given API token.\n 12\u2595 */\n \u279c 13\u2595 public static function client(string $apiKey, string $organization = null): Client\n 14\u2595 {\n 15\u2595 return self::factory()\n 16\u2595 ->withApiKey($apiKey)\n 17\u2595 ->withOrganization($organization)\n\n 1 app/Console/Commands/AdminCommand.php:151\n OpenAI::client()\n\n 2 app/Console/Commands/AdminCommand.php:39\n App\\Console\\Commands\\AdminCommand::generateContentUsingOpenAI()\n\n\n\nAny suggesitons what I am doing wrong?\nI appreciate your replies!\nUPDATE\nAfter deploying to the server I need to run this script so that it seems to work:\nRoute::get('/clear', function() {\n Artisan::call('cache:clear');\n Artisan::call('config:clear');\n\n return \"Cache, Config is cleared\";\n})->middleware(['auth', 'admin']);\n\nWhen deploying this script is also automatically run:\n#!/bin/sh\nset -e\n\necho \"Deploying application ...\"\n\n# Enter maintenance mode\n(php artisan down) || true\n # Update codebase\n git fetch origin deploy\n git reset --hard origin/deploy\n\n # Install dependencies based on lock file\n composer install --no-interaction --prefer-dist --optimize-autoloader\n\n # Migrate database\n php artisan migrate --force\n\n # Note: If you're using queue workers, this is the place to restart them.\n # ...\n\n\n # Clear cache\n # php artisan optimize\n\n php artisan config:cache\n php artisan route:clear\n php artisan route:cache\n php artisan view:clear\n php artisan view:cache\n php artisan auth:clear-resets\n php artisan cache:clear\n php artisan config:clear\n\n # Generate sitemap\n # php artisan sitemap:generate\n\n # Reload PHP to update opcache\n echo \"\" | sudo -S service php8.1-fpm reload\n# Exit maintenance mode\nphp artisan up\n\necho \"Application deployed!\""} +{"id": "000486", "text": "I have 2 separate columns for the date: the date itself in YYYY-mm-dd format, and a time column in time(7) datatype, for example 11:15:10.0000000\nHow can I check for rows that are in the future?\nI can get the first part, for the day itself:\nMyModel::where('date', '>=', Carbon::today())->get()\n\nBut when I try adding the time it doesn't work:\nMyModel::where('date', '>=', Carbon::today())->where('time', '>', Carbon::now()->format('H:i'))->get()\n\nbecause they are separate and now even though the date is in the future, the time is separate so there may be a situation where the time doesn't match. 
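For the separate date and time columns described above, one way to express 'in the future' without merging the columns is a grouped where: the date is after today, or the date is today and the time is later than now. A minimal sketch, assuming the columns are literally named date and time:

use Carbon\Carbon;

$now = Carbon::now();

$upcoming = MyModel::where(function ($query) use ($now) {
        // strictly later calendar days
        $query->where('date', '>', $now->toDateString())
              // or today, but at a later time of day
              ->orWhere(function ($q) use ($now) {
                  $q->where('date', $now->toDateString())
                    ->where('time', '>', $now->format('H:i:s'));
              });
    })
    ->get();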
So I somehow need to have both the date and the time related to it in the future, not separately"} +{"id": "000487", "text": "I have an eloquent model in Laravel that I would like to mark an user_id attribute as deprecated using the @deprectated tag in PHPDoc.\nI can add the @property tag to my model to document user_id but if I try to add the deprecated tag my IDE (vscode) still does not inform me that the attribute is deprecated.\nLooking at the documentation I can't see any way of combining both @property and @deprecated.\nDoes anyone know a way for me to document this correctly?\nThe model\n/**\n * App\\Task\\Models\\Task\n *\n * @property null|int $user_id\n */\nfinal class Task extends Model\n{\n protected $guarded = ['id'];\n\n protected $casts = [\n 'user_id' => 'int',\n ];\n\n protected $fillable = [\n 'user_id',\n ];\n\n}\n\nAttempted Code\n@property @deprecated null|int $user_id\nVersions\n\nLaravel: 10\nPHP: 8.2\nVSCode: 1.80.2\nVSCode extension PHP Intelephense: 1.9.5"} +{"id": "000488", "text": "I'm currently developing a Laravel package. The service provider of this package relies on magic methods config('my.config') and config_path('../my-config.php') defined in Illuminate/Foundation/helpers.php. PHPStorm notifies that it can't find these magic methods.\n\nHow to make sure that these dependencies resolve?\nIs it possible to check which package installed a certain namespace in Composer?\n\nThings I tried:\n\nThe standalone package: https://packagist.org/packages/illuminate/foundation This package is apparently abandoned. No alternatives provided\nInstall the entire laravel/framework. This conflicts with the application with which I try to install the package. I also want to avoid installing unnecessary dependencies in the package.\nI tried a bunch of packages from Illuminate: config, container and contracts. When I check the vendor folder, I don't see any foundation directory in the Illuminate directory"} +{"id": "000489", "text": "I'm using the use Illuminate\\Validation\\Rules\\Password; on the Password Validation rules and the use Illuminate\\Support\\Facades\\Password; for the password reset the application is displaying error then I put the two of them together.\nvalidate([\n 'name' => ['required', 'min:3', 'max:255', Rule::unique('users', 'name')],\n 'email' => ['required', 'email', Rule::unique('users', 'email')],\n 'password' =>\n [\n 'required', 'confirmed',\n Password::min(8)\n ->max(255)\n ->mixedCase()\n ->letters()\n ->numbers()\n ->symbols()\n ->uncompromised(),\n ],\n ]);\n\nHere for the Password Reset\n// Submit Forgot Password \n public function forgotSubmit(Request $request)\n {\n $request->validate(['email' => 'required|email']);\n\n $status = Password::sendResetLink(\n $request->only('email')\n );\n\n return $status === Password::RESET_LINK_SENT\n ? back()->with(['status' => __($status)])->with('reset', 'We have e-mailed your password reset link!')\n : back()->withErrors(['email' => __($status)]);\n }"} +{"id": "000490", "text": "Is it possible to validate and compare between 2 time inputs using the available rules, or I will have to have custom rule in this case?\nI have 2 time inputs - start and end. 
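For the two time inputs above, the built-in rules appear to be enough: after: accepts another field name and both values are run through strtotime(), which understands bare H:i strings, so no custom rule should be needed. A minimal sketch, assuming the inputs are named start and end:

$request->validate([
    'start' => ['required', 'date_format:H:i'],
    // 'after:start' compares against the other field, so end must be a later time
    'end'   => ['required', 'date_format:H:i', 'after:start'],
]);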
They are in the format HH:mm and I need the end date to be greater than the start date.\nIs there a combination I can have with the available rules to do that, or I should just create a custom rule?"} +{"id": "000491", "text": "I am new to Laravel and here is my setup:\n\nI have a companies table which is linked to a plans table (plan_id)\nI have a function that seed the plans table with very specific data\nI have created a factory for the Company model but not for the Plan model as I don't want random data for this model (I have 3 plans with prices)\n\nMy issue is that when I run the Company factory, it creates new Plan entries instead of using the existing ones.\nHere is my DatabaseSeeder:\n/**\n * Seed the application's database.\n */\npublic function run(): void\n{\n // Create the plans.\n $this->createPlans();\n\n Company::factory()\n ->count(3)\n ->create();\n}\n\n/**\n * Create the plans.\n *\n * @return void\n */\nprivate function createPlans(): void\n{\n Plan::create([\n 'name' => __('Free'),\n 'amount' => 0.00,\n ]);\n\n Plan::create([\n 'name' => __('Pro'),\n 'amount' => 99.95,\n ]);\n\n Plan::create([\n 'name' => __('Business'),\n 'amount' => 199.95,\n ]);\n}\n\nAnd my CompanyFactory:\npublic function definition(): array\n{\n return [\n 'plan_id' => fake()->randomElement(Plan::all()->pluck('id')->toArray()),\n 'name' => fake()->company(),\n 'plan_status' => fake()->randomElement(['active', 'pending', 'canceled']),\n ];\n}\n\nEach time I seed the database, it creates x new entries for the plans instead of using existing entries. I have tried many things, but I can't get it to work... Could somebody point me in the right direction?"} +{"id": "000492", "text": "I just recently started working with Laravel 10.\nI'm using nwidart/laravel-modules v10 for creating modules in my new project.\nSince the project is going to be an SaaS product, I am also using stancl/tenancy for multiple tenants.\nI have a config file for every tenant and want to dynamically activate or deactivate the modules from the tenants config.\nThe Config-Key is ActivatedModules and looks like this:\n'ActivatedModules' => [\n ['id' => 'homepage', 'name' => 'Homepage', 'icon' => 'homepage', 'visibility' => 'extern', 'no' => 1], // Homepage\n],\n\nMy idea was to uses the method Module::all() of use Nwidart\\Modules\\Facades\\Module; as mentioned in the docs.\nMy Problem is that there is an error saying that this funcion is not existing and on the website comes the following error:\nFatal error: Uncaught RuntimeException: A facade root has not been set. 
in\nD:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Support\\Facades\\Facade.php:350\nStack trace: #0\nD:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Exceptions\\RegisterErrorViewPaths.php(16):\nIlluminate\\Support\\Facades\\Facade::__callStatic('replaceNamespac...',\nArray) #1\nD:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Exceptions\\Handler.php(674):\nIlluminate\\Foundation\\Exceptions\\RegisterErrorViewPaths->__invoke() #2\nD:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Exceptions\\Handler.php(655):\nIlluminate\\Foundation\\Exceptions\\Handler->registerErrorViewPaths() #3\nD:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Exceptions\\Handler.php(586):\nIlluminate\\Foundation\\Exceptions\\Handler->renderHttpException(Object(Symfony\\Component\\HttpKernel\\Exception\\HttpException))\n#4 D:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Exceptions\\Handler.php(492):\nIlluminate\\Foundation\\Exceptions\\Handler->prepareResponse(Object(Illuminate\\Http\\Request),\nObject(Symfony\\Component\\HttpKernel\\Exception\\HttpException)) #5\nD:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Exceptions\\Handler.php(409):\nIlluminate\\Foundation\\Exceptions\\Handler->renderExceptionResponse(Object(Illuminate\\Http\\Request),\nObject(RuntimeException)) #6\nD:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Http\\Kernel.php(509):\nIlluminate\\Foundation\\Exceptions\\Handler->render(Object(Illuminate\\Http\\Request),\nObject(RuntimeException)) #7\nD:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Http\\Kernel.php(148):\nIlluminate\\Foundation\\Http\\Kernel->renderException(Object(Illuminate\\Http\\Request),\nObject(RuntimeException)) #8\nD:\\Programmieren\\referee365\\public\\index.php(51):\nIlluminate\\Foundation\\Http\\Kernel->handle(Object(Illuminate\\Http\\Request))\n#9 D:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\resources\\server.php(16):\nrequire_once('D:\\\\Programmiere...') #10 {main} thrown in\nD:\\Programmieren\\referee365\\vendor\\laravel\\framework\\src\\Illuminate\\Support\\Facades\\Facade.php\non line 350\n\nI hope some of you can help me with this, in the best case, easy problem."} +{"id": "000493", "text": "when I can install Laravel password, I got this error, what exactly should I do to solve this problem, if you can help me step by step to solve the problem\n composer require laravel/passport\nhttps://repo.packagist.org could not be fully loaded (curl error 6 while downloading https://repo.packagist.org/packages.json: Could not resolve host: repo.packagist.org), package information was loaded from the local cache and may be out of date\n./composer.json has been updated\nRunning composer update laravel/passport\nLoading composer repositories with package information\nhttps://repo.packagist.org could not be fully loaded (curl error 6 while downloading https://repo.packagist.org/packages.json: Could not resolve host: repo.packagist.org), package information was loaded from the local cache and may be out of date\nUpdating dependencies\nYour requirements could not be resolved to an installable set of packages.\n\n Problem 1\n - laravel/passport[v11.5.0, ..., v11.8.4] require league/oauth2-server ^8.2 -> 
satisfiable by league/oauth2-server[8.2.0, ..., 8.5.3].\n - laravel/passport[v11.8.5, ..., v11.8.8] require lcobucci/jwt ^4.3|^5.0 -> satisfiable by lcobucci/jwt[4.3.0, 5.0.0].\n - league/oauth2-server[8.5.2, ..., 8.5.3] require lcobucci/jwt ^4.3 || ^5.0 -> satisfiable by lcobucci/jwt[4.3.0, 5.0.0].\n - laravel/passport[v0.1.0, ..., v0.2.4, v1.0.0, ..., v1.0.18, v2.0.0, ..., v2.0.11, v3.0.0, ..., v3.0.2, v4.0.0, ..., v4.0.3, v5.0.0, ..., v5.0.3, v6.0.0, ..., v6.0.7, v7.0.0, ..., v7.5.1] require guzzlehttp/guzzle ~6.0 -> found guzzlehttp/guzzle[6.0.0, ..., 6.5.8] but it conflicts with your root composer.json require (^7.2).\n - laravel/passport[v8.0.0, ..., v8.5.0, v9.0.0, ..., v9.3.2] require php ^7.2 -> your php version (8.2.0) does not satisfy that requirement.\n - laravel/passport v9.4.0 requires illuminate/auth ^6.18.31|^7.22.4 -> found illuminate/auth[v6.18.31, ..., v6.20.44, v7.22.4, ..., v7.30.6] but these were not loaded, likely because it conflicts with another require.\n - laravel/passport[v10.0.0, ..., v10.0.1] require php ^7.3 -> your php version (8.2.0) does not satisfy that requirement.\n - laravel/passport[v10.1.0, ..., v10.2.2] require illuminate/auth ^8.2 -> found illuminate/auth[v8.2.0, ..., v8.83.27] but these were not loaded, likely because it conflicts with another require.\n - laravel/passport[v10.3.0, ..., v10.3.2] require illuminate/auth ^8.2|^9.0 -> found illuminate/auth[v8.2.0, ..., v8.83.27, v9.0.0, ..., v9.52.15] but these were not loaded, likely because it conflicts with another require.\n - laravel/passport[v10.3.3, ..., v10.4.2] require illuminate/auth ^8.37|^9.0 -> found illuminate/auth[v8.37.0, ..., v8.83.27, v9.0.0, ..., v9.52.15] but these were not loaded, likely because it conflicts with another require.\n - laravel/passport[v11.0.0, ..., v11.4.0] require illuminate/auth ^9.0 -> found illuminate/auth[v9.0.0, ..., v9.52.15] but these were not loaded, likely because it conflicts with another require.\n - league/oauth2-server[8.2.0, ..., 8.5.1] require psr/http-message ^1.0.1 -> found psr/http-message[1.0.1, 1.1] but the package is fixed to 2.0 (lock file version) by a partial update and that version does not match. Make sure you list it as an argument for the update command.\n - lcobucci/jwt[4.3.0, 5.0.0] require ext-sodium * -> it is missing from your system. Install or enable PHP's sodium extension.\n - Root composer.json requires laravel/passport * -> satisfiable by laravel/passport[v0.1.0, ..., v0.2.4, v1.0.0, ..., v1.0.18, v2.0.0, ..., v2.0.11, v3.0.0, v3.0.1, v3.0.2, v4.0.0, v4.0.1, v4.0.2, v4.0.3, v5.0.0, v5.0.1, v5.0.2, v5.0.3, v6.0.0, ..., v6.0.7, v7.0.0, ..., v7.5.1, v8.0.0, ..., v8.5.0, v9.0.0, ..., v9.4.0, v10.0.0, ..., v10.4.2, v11.0.0, ..., v11.8.8].\n\nTo enable extensions, verify that they are enabled in your .ini files:\n - F:\\xampp\\php\\php.ini\nYou can also run `php --ini` in a terminal to see which files are used by PHP in CLI mode.\nAlternatively, you can run Composer with `--ignore-platform-req=ext-sodium` to temporarily ignore these required extensions.\n\nUse the option --with-all-dependencies (-W) to allow upgrades, downgrades and removals for packages currently locked to specific versions.\nYou can also try re-running composer require with an explicit version constraint, e.g. 
\"composer require laravel/passport:*\" to figure out if any version is installable, or \"composer require laravel/passport:^2.1\" if you know which you need.\n\nInstallation failed, reverting ./composer.json and ./composer.lock to their original content.\n\n\n\nI use Windows 10 and xampp or the following specifications:\nApache/2.4.54 (Win64) OpenSSL/1.1.1p PHP/8.2.0\nDatabase client version: libmysql - mysqlnd 8.2.0\nPHP extension: mysqli Documentation curl Documentation mbstring Documentation\nPHP version: 8.2.0\nenter image description here\nPlease help me to solve this problem if you can"} +{"id": "000494", "text": "This is the code from my Laravel application:\npublic function sendNotifications()\n {\n $matchingSubscriptions = DB::table('tournament_match_plan')\n ->join('push_subscriptions', 'push_subscriptions.age_group', '=', 'tournament_match_plan.league')\n ->where('tournament_match_plan.start', '=', '11:20:00')\n ->where('tournament_match_plan.team_1', '=', 'push_subscriptions.club')\n ->orwhere('tournament_match_plan.team_2', '=', 'push_subscriptions.club')\n ->get();\n\n dd($matchingSubscriptions);\n}\n\nHere is the debug message:\nIlluminate\\Support\\Collection {#751 \u25bc // app\\Http\\Controllers\\Guests\\GuestsPushController.php:97\n #items: []\n #escapeWhenCastingToString: false\n}\n\nWhy don't I get any result from my Laravel Code?\nI've tried the same query in phpMyAdmin:\nSELECT *\nFROM tournament_match_plan\nJOIN push_subscriptions ON push_subscriptions.age_group = tournament_match_plan.league\nWHERE tournament_match_plan.start = '11:20:00'\nAND (tournament_match_plan.team_1 = push_subscriptions.club OR tournament_match_plan.team_2 = push_subscriptions.club);\n\nWith the above code, I get the correct result."} +{"id": "000495", "text": "I've been developing in Laravel 10 in my local, and seeing the web using php artisan serve then I uploaded all those files to a webserver, updated .env file with the new DB and url data, and tried a couple of times and work ok!! But after some hours, it stoped working. I can access the root folder and see the Laravel main screen, but if I want to access some controller it says \"Target class does not exists\".\nI guess it worked on local, and then on the server the firsts times, until some cache refreshed and then it didn't work anymore.\nI accessed the webserver through ssh and run:\ncomposer update\ncomposer install --optimize-autoloader --no-dev as \nphp artisan cache:clear\nphp artisan route:clear\nphp artisan optimize:clear\n\nBut nothing seems to work. Any idea what could be happening?\nEDIT: If I add add the namespace in the route like this\nRoute::get('/landing', 'App\\Http\\Controllers\\LandingController@index');\n\ninstead of this:\nRoute::get('/landing', 'LandingController@index');\n\nit works fine! But I need to know WHY on local it works without adding the namespace and in the webserver it doesn't!\nEDIT 2: I move this to it's own subdomain, so it isn't in a subfolder anymore, and it still happens."} +{"id": "000496", "text": "I have created a custom class in laravel 10 located in:\nApp\\Helpers\\CompletedOrders\n\nThe class contain this code:\nselect('name')->where('id', $id)->get();\n\nThen I use it in the view {{$user_name[0]->name}}.\nIt shows special characters as desired. 
However, as a quicker solution to get the user name I decided to use\n$user_name = User::where('id', $id)->pluck('name');\n\nIt shows some special characters B\\u00fcy\\u00fck instead of B\u00fcy\u00fck, for example, when I use it {{$user_name}} in my view.\nIs there any missing part in my code?"} +{"id": "000499", "text": "Is it possible to have eager loading without $append attributes? I have following code:\nProduct model:\npublic function category(): BelongsTo\n {\n return $this\n ->belongsTo(ProductCategory::class, 'product_category_id');\n }\n\npublic static function adminTableData(): Builder\n {\n return self::query()->select('*')\n ->with(['category' => function($query)\n {\n return $query->select('id', 'name');\n }]);\n }\n\nCompany model:\nprotected $appends = [\n 'admin_table_logo',\n 'edit',\n 'remove',\n ];"} +{"id": "000500", "text": "I am using Laravel 10.\nI am utilizing casting for a JSON column in the following manner:\nnamespace App\\Models;\n\nuse Illuminate\\Database\\Eloquent\\Model;\n\nclass Item extends Model\n{\n protected $casts = [\n 'meta' => 'collection', // here\n ];\n}\n\nWhen attempting to update a value within a collection directly, for instance:\n$model->meta->put('test', 100);\n$model->save();\n\nNothing occurs.\nWhen I assign the variable as it is, it functions correctly.\n$model->meta = ['test' => 100];\n$model->save();\n\nHowever what if I only need to update/add a single element?\nI have discovered the following workaround, but is this the intended behavior?\n$meta = $model->meta;\n$meta->put('test', 100);\n$model->meta = $meta;\n$model->save();\n\nIt appears that only direct assignment works in such a case, and it seems that the cast collection does not support any of its write functionality."} +{"id": "000501", "text": "I have tried several tutorials in the internet but it doesn't work or the language not changing, I already stored the translation into \\lang directory, and the session is already set Localization Session\nhere is my source code:\nThis is my controller :\nclass LocalizationController extends Controller\n{\n public function setLang($locale)\n {\n App::setLocale($locale);\n Session::put(\"locale\", $locale);\n\n return redirect()->back();\n }\n}\n\nThis is my Middleware :\nclass LocalizationMiddleware\n{\n /**\n * Handle an incoming request.\n *\n * @param \\Closure(\\Illuminate\\Http\\Request): (\\Symfony\\Component\\HttpFoundation\\Response) $next\n */\n public function handle(Request $request, Closure $next): Response\n {\n if (Session::get(\"locale\") != null) {\n App::setLocale(Session::get(\"locale\"));\n } else {\n Session::put(\"locale\", \"en\");\n App::setLocale(Session::get(\"locale\"));\n }\n\n return $next($request);\n }\n}\n\nThis is the route :\nRoute::get(\"locale/{lang}\", [LocalizationController::class, 'setLang']);\n\nLanguage Switcher :\n\"\"\n \"\"\n\nTranslated Element :\n

{{ __('home.hero.header') }}

\n\nI have tried several tutorials in the internet, please help."} +{"id": "000502", "text": "I'm writing a feature test and noticed that the session ID value changes after every request.\nFor instance, each dumped value will be different:\ndump(session()->getId());\n\n$response = $this->get('/?param=foo');\n\ndump(session()->getId());\n\n$response = $this->get('/?param=foo');\n\ndump(session()->getId());\n\nThis issue doesn't seem to happen outside of the testing environment. I can reload a page in the browser and the session ID remains consistent.\nThis is my web middleware group, which is pretty standard out of the box:\n\\App\\Http\\Middleware\\EncryptCookies::class,\n\\Illuminate\\Cookie\\Middleware\\AddQueuedCookiesToResponse::class,\n\\Illuminate\\Session\\Middleware\\StartSession::class,\n\\Illuminate\\View\\Middleware\\ShareErrorsFromSession::class,\n\\App\\Http\\Middleware\\VerifyCsrfToken::class,\n\\Illuminate\\Routing\\Middleware\\SubstituteBindings::class,\n\nI'm running this Laravel app on Valet on Mac with zero custom PHP configuration changes. Just whatever Homebrew provides with php@8.1.\nHere are the session values from php -i:\nsession.auto_start => Off => Off\nsession.cache_expire => 180 => 180\nsession.cache_limiter => nocache => nocache\nsession.cookie_domain => no value => no value\nsession.cookie_httponly => no value => no value\nsession.cookie_lifetime => 0 => 0\nsession.cookie_path => / => /\nsession.cookie_samesite => no value => no value\nsession.cookie_secure => 0 => 0\nsession.gc_divisor => 1000 => 1000\nsession.gc_maxlifetime => 1440 => 1440\nsession.gc_probability => 1 => 1\nsession.lazy_write => On => On\nsession.name => PHPSESSID => PHPSESSID\nsession.referer_check => no value => no value\nsession.save_handler => files => files\nsession.save_path => no value => no value\nsession.serialize_handler => php => php\nsession.sid_bits_per_character => 5 => 5\nsession.sid_length => 26 => 26\nsession.upload_progress.cleanup => On => On\nsession.upload_progress.enabled => On => On\nsession.upload_progress.freq => 1% => 1%\nsession.upload_progress.min_freq => 1 => 1\nsession.upload_progress.name => PHP_SESSION_UPLOAD_PROGRESS => PHP_SESSION_UPLOAD_PROGRESS\nsession.upload_progress.prefix => upload_progress_ => upload_progress_\nsession.use_cookies => 1 => 1\nsession.use_only_cookies => 1 => 1\nsession.use_strict_mode => 0 => 0\nsession.use_trans_sid => 0 => 0\nsession.trans_sid_hosts => no value => no value\nsession.trans_sid_tags => a=href,area=href,frame=src,form= => a=href,area=href,frame=src,form=\n\nIn my phpunit.xml file:\n\n\nWhat could cause the session to be regenerated every time?"} +{"id": "000503", "text": "I am upgrading a Laravel 5.2 site to 10.20.\nI followed the official documentation to encrypt/decryt the data the Laravel 10.x way as it was done differently on 5.2 (see it used the trait Encryptable).\nWhen retrieving encrypted data from the same database (with the same APP_KEY) in Laravel 5.2 I get Hello World whereas on 10.20 I get s:11:\"Hello World\".\nThe other attributes that are not encrypted are shown correctly.\nI searched the web for that but did not found anything pertaining to that issue. 
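One detail that is easy to miss in this situation: Crypt::encrypt() serializes the value before encrypting it, while the 'encrypted' cast decrypts without unserializing, which is exactly where a leftover s:11:"Hello World" wrapper comes from. A minimal sketch of an accessor that reads such legacy rows with Crypt::decrypt(), which unserializes by default (the attribute name matches the cast shown further down in this question):

use Illuminate\Database\Eloquent\Casts\Attribute;
use Illuminate\Support\Facades\Crypt;

protected function myEncryptedAttributeString(): Attribute
{
    return Attribute::make(
        // legacy rows were written with Crypt::encrypt(), i.e. serialized first,
        // so Crypt::decrypt() (unserialize enabled by default) returns the clean value
        get: fn (?string $value) => $value === null ? null : Crypt::decrypt($value),
        set: fn ($value) => Crypt::encrypt($value),
    );
}

With an accessor like this in place, the attribute would be removed from $casts so the two mechanisms do not both try to decrypt the same column.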
ChatGPT told me it was linked to serialization but even with that keyword I did not find the problem described somewhere.\nIn my model the attributes to be encrypted are cast like this :\n protected $casts = [\n 'my_not_encrypted_attribute' => 'boolean',\n 'my_encrypted_attribute_string' => 'encrypted' \n ];\n\n\n\nCan you tell me how to get rid of that prefix and only get the actual payload as in L5.2 ?\nThanks for your help!\nEDIT :\nManually defining the accessors in my model requires the use of unserialize() to get clean data :\n protected function myEncryptedAttributeString(): Attribute\n {\n return Attribute::make(\n get: fn (string $value) => unserialize(Crypt::decryptString($value)),\n );\n }\n\nSo it would mean that the data stored in the database were serialized before being stored.That's weird because the Encryptable trait shows no trace of serialize in the setters :\npublic function setAttribute($key, $value) {\n if (in_array($key, MyModel::getEncryptableAttributes())) {\n $value = Crypt::encrypt($value);\n }\n\n return parent::setAttribute($key, $value);\n }\n\nNor can I find trace of unserialize in the getters (although for the same database the output was clean) :\npublic function getAttribute($key) {\n $value = parent::getAttribute($key);\n //[...]\n $decrypted = Crypt::decrypt($value);\n //[...]\n return $decrypted;\n}"} +{"id": "000504", "text": "When I retrieve data directly from the model using the following approach:\nContact::where('company_id', auth()->user()->company_id)->get()\n\nThe query looks like this:\nselect * from \"contacts\" where \"company_id\" = 'xxx'\n\nBut when I retrieve data through direct relations like this:\nauth()->user()->company->contacts\n\nThe query looks like this and takes longer:\nselect * from \"contacts\" where \"contacts\".\"company_id\" = 'xxx' and \"contacts\".\"company_id\" is not null\n\nIt adds the additional;\nand \"contacts\".\"company_id\" is not null\n\npart, which affects performance negatively. What should I do to prevent this? Querying directly from models without using relations anywhere also doesn't seem very logical and clean.\nNote: company_id is indexed."} +{"id": "000505", "text": "i am developing a web app with laravel 10 with Livewire 3, trying to scripting for SPA (Single Page Application), problem is when i click \"account\" button its load the blank page with current year as a text \"2023\" instead of accoount.blade.php content.\nthis is what i have tried:\nfrom Livewire/App.php:\n\n\n\n \n \n \n Document\n\n @livewireStyles\n\n\n
\n\n @include('header')\n\n
\n\n @yield('content')\n\n
\n\n
\n @livewireScripts\n\n\n\nfrom resources\\views\\header.blade.php:\n\n\nfrom Livewire/Account.php:\nextends('app')\n ->section('content');\n }\n \n}\n\nfrom resources\\views\\livewire\\account.blade.php:\n
\n Account Page\n
\n\nthe result is once page loads:\n\nWhat wrong i have done here?"} +{"id": "000506", "text": "Let's say I have an uploads.index.view.php and an uploads.show.view.php, which lead to the UploadController, which is a resource controller.\nThe index view shows a list of user-uploaded files as such:\nUpload controller:\npublic function index() {\n return view('home', [\n 'uploads' => Upload::paginate(20)\n ]);\n}\n\nIndex view:\n@foreach ($uploads as $upload)\n\n {{$upload->user->name}}\n {{$upload->category->category}}\n @if ($upload->name)\n id)}}\">{{$upload->name}}\n @else\n id)}}\">{{$upload->title}}\n @endif\n\n@endforeach\n\nAnd the show view shows a single upload's page as such:\npublic function show(Upload $upload) {\n return view('single', [\n 'upload' => $upload\n ]);\n}\n\nThe index should contain a UPLOAD TITLE and the single view contains a Download.\nWhat's the most Eloquent way to have the browser download the corresponding $upload upon clicking any of the two links? Am I making a function inside of the index and show public functions in the UploadController, or a separate public function download() for the UploadController?\nWould appreciate a detailed answer, including what to put in the tags and the route, since I'm fairly new to this."} +{"id": "000507", "text": "I am using laravel 10 and building rest api. I am creating Categories resorce,controller, model\nRoute::apiResource('/admin-category', CategoryController::class);\nthis is the route in api.php.\norderBy('id', 'desc')\n ->paginate(10);\n }\n\n /**\n * Store a newly created resource in storage.\n */\n public function store(StoreCategoryRequest $request)\n {\n $data = $request->validated();\n $data['image'] = $request['image'];\n $data['description'] = $request['description'];\n Category::create($data);\n return response(['success' => true, 'msg' => 'New Category created!'], 201);\n }\n\n /**\n * Display the specified resource.\n */\n public function show(Category $category)\n {\n // \n }\n\n /**\n * Update the specified resource in storage.\n */\n public function update(UpdateCategoryRequest $request, Category $category)\n {\n $data = $request->validated();\n if(isset($request['image'])){\n $data['image'] = $request['image'];\n }\n $category->update($data);\n return response(['success' => true, 'msg' => 'Category updated!'], 200);\n }\n\n /**\n * Remove the specified resource from storage.\n */\n public function destroy(Category $category)\n {\n $category->delete();\n return response(['msg' => 'Deleted successfully', 'success' => true], 201);\n }\n}\n\nThis is my controller and here index() and store method is working as well but destroy and update is not working and not showing any error.\n\n */\n protected $fillable = [\n 'name',\n 'image',\n 'description'\n ];\n}\n\nThis is Category model.\nAnd migration is\nid();\n $table->string('name');\n $table->string('image')->nullable();\n $table->string('description')->nullable();\n $table->timestamps();\n });\n }\n\n /**\n * Reverse the migrations.\n */\n public function down(): void\n {\n Schema::dropIfExists('categories');\n }\n};\n\nI am trying from frontend. And category creating and fetch is working perfectly.\nwhat is the problem with destroy and update?\nI have checked many times payload is sending correctly with category id"} +{"id": "000508", "text": "I am attempting to install my Laravel 10 project in a subfolder so that I can access it at example.com/laravel-project/ instead of example.com/laravel-project/public/. 
Here is the .htaccess configuration I tried in the /laravel-project/ directory:\n\n RewriteEngine On\n RewriteRule ^(.*)$ public/ [L]\n\n\nThis configuration results in a Laravel-styled 404 error when accessing the subfolder URL. I've reviewed other questions and it seems that something has changed in Laravel 10 that prevents this setup from working as it did in previous versions. I'd like to avoid using php artisan serve. Additionally, I have the following set in my .env file, which should also be correct:\n#...\nAPP_DIR=laravel-project\nAPP_URL=http://example.com/laravel-project\n#...\n\nHow can I configure the .htaccess file or Laravel 10 setup to correctly render the homepage at the subfolder URL without having to go to public/ in the URL?\nEdit: The site will be on shared hosting"} +{"id": "000509", "text": "I am creating a migration in Laravel and I need to reference the same table 2 times. I explain:\nThe \"inventory\" table has the \"responsible\" and \"created by\" fields that refer to the user table. The person responsible and the creator may be the same person in certain cases.\nThis is the migration code:\nSchema::create('inventory', function (Blueprint $table) {\n $table->id();\n $table->integer('type');\n $table->string('state', 50);\n $table->timestamps();\n $table->foreignId('user_id')->constrained(\n table: 'user',\n indexName: 'created_inventory_id'\n )->cascadeOnUpdate()->restrictOnDelete();\n $table->foreignId('user_id')->constrained(\n table: 'user',\n indexName: 'responsible_inventory_id'\n )->cascadeOnUpdate()->restrictOnDelete();\n $table->foreignId('area_id')->constrained(\n table: 'area',\n indexName: 'area_inventory_id'\n )->cascadeOnUpdate()->restrictOnDelete();\n });\n\nWhen I run the migration it gives me the following error:\nSQLSTATE[42701]: Duplicate column: 7 ERROR: column \"user_id\" was specified more than once (Connection: pgsql, SQL: create table \"inventory\"\n\nHow could I resolve that?"} +{"id": "000510", "text": "I can not get object data to pass to Blade template with Mccarlosen/Laravel-mpdf Package. The desire is to pass a singe row database record but call to page is met with error - \"undefined variable $agency\"\nTarget blade data syntax is correct as follows -\n{{ $agency->id }}\n\nInstructions per the repo as follows -\n use PDF;\n\nclass ReportController extends Controller \n{\n public function viewPdf()\n {\n $data = [\n 'foo' => 'bar'\n ];\n\n $pdf = PDF::loadView('pdf.document', $data);\n\n return $pdf->stream('document.pdf');\n }\n\n}\n\nIn my controller, I am calling the Object without failure, grabbing the data without failure, and streaming pdf correctly without data (all tested). But when $agency is chained according to example above (per the common Laravel syntax) it fails. I believe this might be secondary to nested structure of data in Laravel object. But not sure. 
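In the package example quoted above, the second argument to loadView() is a plain array of view variables, just as with view(), so a model only becomes $agency inside the Blade file when it is wrapped under that key. A minimal sketch under that assumption, reusing the Agency model and template path from this question:

public function print(string $id)
{
    $agency = Agency::find($id);

    // wrap the model under the key the Blade template expects ($agency)
    $pdf = PDF::loadView('modules.agencyMain.print', ['agency' => $agency]);
    // equivalently: PDF::loadView('modules.agencyMain.print', compact('agency'));

    return $pdf->stream('agency.pdf');
}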
My controller function is below.\npublic function print(string $id) {\n\n // Read a single row of data in the Model\n $agency = Agency::find($id);\n\n // Redirect to the view with the data\n $pdf = PDF::loadView('modules.agencyMain.print',$agency);\n\n return $pdf->stream('agency.pdf');\n\n }\n\nLooking for guidance/answers."} +{"id": "000511", "text": "I want to save image file in the storage folder but when I insert a file in my form and I click on the button, it displays me the error \"Path cannot be empty\".\nPath cannot be empty\n{\"exception\":\"[object] (RuntimeException(code: 0): Path cannot be empty at C:\\\\laragon\\\\www\\\\nextdrive_web\\\\vendor\\\\nyholm\\\\psr7\\\\src\\\\Factory\\\\Psr17Factory.php:41)\n[stacktrace]\n#0 C:\\\\laragon\\\\www\\\\nextdrive_web\\\\vendor\\\\symfony\\\\psr-http-message-bridge\\\\Factory\\\\PsrHttpFactory.php(118): Nyholm\\\\Psr7\\\\Factory\\\\Psr17Factory->createStreamFromFile('')\n#1 C:\\\\laragon\\\\www\\\\nextdrive_web\\\\vendor\\\\symfony\\\\psr-http-message-bridge\\\\Factory\\\\PsrHttpFactory.php(100): Symfony\\\\Bridge\\\\PsrHttpMessage\\\\Factory\\\\PsrHttpFactory->createUploadedFile(Object(Symfony\\\\Component\\\\HttpFoundation\\\\File\\\\UploadedFile))\n#2 C:\\\\laragon\\\\www\\\\nextdrive_web\\\\vendor\\\\symfony\\\\psr-http-message-bridge\\\\Factory\\\\PsrHttpFactory.php(72): Symfony\\\\Bridge\\\\PsrHttpMessage\\\\Factory\\\\PsrHttpFactory->getFiles(Array)\n#3 C:\\\\laragon\\\\www\\\\nextdrive_web\\\\vendor\\\\laravel\\\\framework\\\\src\\\\Illuminate\\\\Routing\\\\RoutingServiceProvider.php(139): Symfony\\\\Bridge\\\\PsrHttpMessage\\\\Factory\\\\PsrHttpFactory->createRequest(Object(Illuminate\\\\Http\\\\Request))\n#4 C:\\\\laragon\\\\www\\\\nextdrive_web\\\\vendor\\\\laravel\\\\framework\\\\src\\\\Illuminate\\\\Container\\\\Container.php(873): Illuminate\\\\Routing\\\\RoutingServiceProvider->Illuminate\\\\Routing\\\\{closure}(Object(Illuminate\\\\Foundation\\\\Application), Array)\n#5 C:\\\\laragon\\\\www\\\\nextdrive_web\\\\vendor\\\\laravel\\\\framework\\\\src\\\\Illuminate\\\\Container\\\\Container.php(758): Illuminate\\\\Container\\\\Container->build(Object(Closure))"} +{"id": "000512", "text": "I'm using Laravel 10 and Laravel Passport and in this project, I tried registering new users like this:\npublic function register(Request $request)\n {\n $request->validate([\n 'name' => 'required|max:255',\n 'email' => 'required|unique:users|max:255',\n 'password' => 'required|min:6'\n ]);\n\n $user = User::create([\n 'name' => $request->name,\n 'email' => $request->email,\n 'password' => Hash::make($request->password)\n ]);\n\n $token = $user->createToken('MyApp')->accessToken;\n\n return response([\n 'token' => $token\n ]);\n }\n\nThen I defined another route which is under api middleware:\nRoute::post('register',[AuthenticationController::class,'register']);\n\nRoute::middleware('auth:api')->group(function() {\n Route::resource('products', ProductController::class);\n});\n\nAnd in the ProductController I tried adding this method for storing new products:\npublic function store(Request $request)\n {\n $request->validate([\n 'title' => 'required|max:255',\n 'description' => 'required|max:255',\n 'price' => 'required'\n ]);\n\n if ($request->user()) {\n Product::create([\n 'title' => $request->title,\n 'description' => $request->description,\n 'price' => $request->price,\n 'user_id' => $request->user()->id\n ]);\n\n return response([\n 'message' => 'product created successfully'\n ],201);\n }else{\n return response(['message' => 'User not 
authenticated.'], 401);\n }\n }\n\nBut when I test the url in PostMAN which is this:\nhttp://localhost:8000/api/products\n\nI get this message:\n{\n \"message\": \"Unauthenticated.\"\n}\n\nHowever I have copied and pasted the token retrieved from /register rendpoint as Token input of Authorization section:\n\nI also set the Headers to Accept and application/json and sent these form-data as Body:\ntitle:myproduct\ndescription:The production desc\nprice:2000\n\nI don't know why I get this Unauthenticated Message, I also configured the auth.php like this:\n'guards' => [\n 'web' => [\n 'driver' => 'session',\n 'provider' => 'users',\n ],\n 'api' => [\n 'driver' => 'passport',\n 'provider' => 'users',\n ],\n ],\n\nAnd added this Service Provider to app.php:\nLaravel\\Passport\\PassportServiceProvider::class,\n\nAnd used the correct class at User Model:\nuse Laravel\\Passport\\HasApiTokens;\n\nclass User extends Authenticatable\n{\n use HasApiTokens, HasFactory, Notifiable;\n ...\n}\n\nSo what's going wrong here? How can I solve this issue?\n\nUPDATE #1:\nResult of dd($user->createToken('MyApp'));:"} +{"id": "000513", "text": "I created the laravel project and the problem is that the admin users are filling the form along jpg file and the file is being uploaded and being stored in storage directory but where the image aim to be displayed to public users it is not being displayed.\n{{url('/storage/app/news/' . $val2->id . '.png') }}\"\n\nAlthough the file exists at that locations.\nI tried with changing the file reading mode permission for server.\nI have not moved to deployment phase although, the issue is in Laravel development server.\nThe file is not being displayed at web while file is there don't know access issue or what else."} +{"id": "000514", "text": "I'm working with Laravel latest version and I want to show an image like this:\nmed_path) }}\" alt=\"Thumbnail\" style=\"width: 100px; height: auto;\">\n\nAnd when I see the source code it adds this value to src attribute of image tag:\nhttp://localhost:8000/storage/public/medias/jpg/57N8MJmsp2y8P0aexysV_200x200.jpg\nBut the image itself not showing up somehow:\n\nHere is also the value of variable ($media->med_path) in the DB:\npublic/medias/jpg/57N8MJmsp2y8P0aexysV_200x200.jpg\n\nHowever I have already ran the command php artisan storage:link and the links are already exists via this filesystems.php config file:\n'disks' => [\n\n 'local' => [\n 'driver' => 'local',\n 'root' => storage_path('app'),\n 'throw' => false,\n ],\n\n 'public' => [\n 'driver' => 'local',\n 'root' => storage_path('app/public'),\n 'url' => env('APP_URL').'/storage',\n 'visibility' => 'public',\n 'throw' => false,\n ],\n\n ...\n\n 'links' => [\n public_path('storage') => storage_path('app/public'),\n ],\n\nAnd the image already exists in this directory of my project:\n-project\n -storage\n -app\n -public\n -medias\n -jpg\n -57N8MJmsp2y8P0aexysV_200x200.jpg\n\nSo what's going wrong here? How can I solve this issue and show the image properly?"} +{"id": "000515", "text": "I'm trying to cache my routes using php artisan route:cache and it returns a logic exception because it thinks the route name is already used, which isn't the case. Here's the exception i get:\n\nLogicException : Unable to prepare route [registreren] for serialization. Another route has already been assigned name [auth.register].\n\nHere's my primary web.php routing file:\nRoute::name('auth.')->middleware(['basket'])->group(__DIR__ . 
'/web/auth.php');\nRoute::name('client.')->middleware(['basket'])->group(__DIR__ . '/web/client.php');\nRoute::prefix('admin')->name('admin.')->middleware(['admin'])->group(__DIR__ . '/web/admin.php');\n\nHere's the excerpt of the auth.php routing file which throws the exception:\nRoute::controller(AuthRegisterController::class)->name('register')->middleware('guest')->group(function () {\n Route::get('/registreren', 'view');\n Route::post('/registreren', 'action');\n});\n\n\nI've searched through the routing files if i didn't accidently used the auth.register name twice but i didn't. However when i name the routes in the register group, it works:\nRoute::controller(AuthRegisterController::class)->name('register.')->middleware('guest')->group(function () {\n Route::get('/registreren', 'view')->name('get');\n Route::post('/registreren', 'action')->name('post');\n});\n\nNow i'm wondering, is this expected behaviour? The routing works perfectly without naming the get and post routes:\n\n
\n\nIf this is expected behaviour, then the exception description is very misleading. I am hoping there's another solution to this problem because we've used the same approach for dozens of routes not to mention the hunderds of route calls in our views. Maybe it's a bug or i am doing something else wrong because it would require a huge refactor to name all the routes and route calls. Any thoughts would be very welcome, thanks."} +{"id": "000516", "text": "I have two models:\n\nUser\nPlace\nand a Pivot as:\nRole(user_id, place_id, type)\nand an example of type is \"[\"manager\", \"reception\"]\" and i want to change this value to collection when retriving as:\nPlace::find(1)->users[0]->pivot->type\n\ni add this method but it doesn't work and return string:\n public function getTypeAttribute($value){\n return collect(json_decode($value));\n }"} +{"id": "000517", "text": "I have a issue while trying to mock multi Storage in laravel test enviromnment.\nHere is my my code:\npublic function sftp ( Sibling $sibling ) {\n $file_paths = Storage::build($sibling->config)\n ->files($this->track->track_token);\n Storage::disk('public')\n ->makeDirectory($this->track->track_token);\n foreach ( $file_paths as $file_path ) {\n $file_content = Storage::build($sibling->config)\n ->get($file_path);\n TrackMp3::query()\n ->where('track_id' , $this->track->id)\n ->where('file_name' , basename($file_path))\n ->update([\n 'downloaded_at' => now() ,\n ]);\n Storage::disk('public')\n ->put($file_path , $file_content);\n }\n }\n\nHere is my test case:\n\n\npublic function test_sftp_works_when_track_exists_in_sibling () {\n $track_token = md5('sample');\n $track_320 = UploadedFile::fake()\n ->create('track_320.mp3')\n ->getContent();\n $track_160 = UploadedFile::fake()\n ->create('track_160.mp3')\n ->getContent();\n $track_96 = UploadedFile::fake()\n ->create('track_96.mp3')\n ->getContent();\n $track_demo = UploadedFile::fake()\n ->create('track_demo.mp3')\n ->getContent();\n $sibling = SiblingFactory::new()\n ->create();\n $public_disk = Storage::fake('public');\n $sibling_disk = Storage::fake('sibling');\n Storage::shouldReceive('build')\n ->with($sibling->config)\n ->andReturn($sibling_disk);\n $sibling_disk->put($track_token . '/track_320.mp3' , $track_320);\n $sibling_disk->put($track_token . '/track_160.mp3' , $track_160);\n $sibling_disk->put($track_token . '/track_96.mp3' , $track_96);\n $sibling_disk->put($track_token . '/track_demo.mp3' , $track_demo);\n $track = TrackFactory::new()\n ->md5Fetched()\n ->has(TrackMp3Factory::new([ 'md5' => md5_file($sibling_disk->path($track_token . '/track_320.mp3')) ])\n ->fileName320())\n ->has(TrackMp3Factory::new([ 'md5' => md5_file($sibling_disk->path($track_token . '/track_160.mp3')) ])\n ->fileName160())\n ->has(TrackMp3Factory::new([ 'md5' => md5_file($sibling_disk->path($track_token . '/track_96.mp3')) ])\n ->fileName96())\n ->has(TrackMp3Factory::new([ 'md5' => md5_file($sibling_disk->path($track_token . 
'/track_demo.mp3')) ])\n ->fileNameDemo())\n ->create([ 'track_token' => $track_token ]);\n\n Storage::fake('public'); // ----> Error happend here\n Artisan::call('download');\n }\n\nHere is the error:\nMockery\\Exception\\BadMethodCallException: Received Mockery_2_Illuminate_Filesystem_FilesystemManager::createLocalDriver(), but no expectations were specified\nC:\\Development\\projects\\track-download-manager\\vendor\\laravel\\framework\\src\\Illuminate\\Support\\Facades\\Facade.php:353\nC:\\Development\\projects\\track-download-manager\\vendor\\laravel\\framework\\src\\Illuminate\\Support\\Facades\\Storage.php:107\nC:\\Development\\projects\\track-download-manager\\tests\\Feature\\DownloadCommandTest.php:55"} +{"id": "000518", "text": "I'm currently develop a web application with Laravel.\nMy database engine is a Microsoft SQL Server.\nFor some data I preferred to generate an uuid.\nWhen I use Windows to run my Laravel app, the uuid format is correct :\n\nA5EE121A-1F10-46FC-B779-49D2A0FA3B68\n\nBut when I ran my Laravel app under linux, and use the same database, the uuid format is like this :\n\nb\"\\x1A\\x12\u00ee\u00a5\\x10\\x1F\u00fcF\u00b7yI\u00d2 \u00fa;h\"\n\nI don't know where is the problem...\nHave you an idea ?\nThanks.\nThe goal is to retrieve the same format when the Laravel app ran under Windows and under Linux."} +{"id": "000519", "text": "I'm currently working on a project using Laravel Filament v3, and I want to implement a feature where a sound plays when a database notification is received. I've successfully set up database notifications using Filament's built-in features, but I'm unsure about how to trigger a sound when a notification is received.\nI've searched for solutions online but haven't found a clear, up-to-date guide specifically for Laravel Filament v3. Could someone provide guidance on how to achieve this sound notification feature in Laravel Filament v3?\nHere's what I've done so far:\nI've set up database notifications for my application.\nI have a notification system that works with the Laravel notification system, and I can send notifications to users.\nWhat I'm missing is the ability to play a sound when a notification is received. I'd like to know how to integrate this audio notification feature into my Laravel Filament v3 project.\nAny code examples or step-by-step instructions would be greatly appreciated. 
Thanks in advance for your help!"} +{"id": "000520", "text": "I'm facing problem in fetching data from controller to blade.php file they give an error of undefined variable $plans.\nDalyTaskPlanController.php\npublic function dailyTaskPlan()\n {\n $plans = $this->plans = DailyTaskPlan::select(\n 'projects.project_name',\n 'daily_task_plans.*',\n 'tasks.heading',\n 'users.id')\n ->leftJoin('projects', 'daily_task_plans.project_id', '=', 'projects.id')\n ->leftJoin('tasks', 'daily_task_plans.task_id', '=', 'tasks.id')\n ->join('users','daily_task_plans.user_id', '=','users.id')\n ->take(4)\n ->get();\n return view('dailytaskplan::dashboard', compact('plans'));\n}\n\nin view file they give an error undefined variable\n @foreach ($plans as $plan)\n \n \n {{ $plan->project_name }}\n {{ $plan->heading }}\n {{ $plan->date }}\n {{ $plan->estimate_hours }} hrs {{ $plan->estimate_minutes }} mins\n {{ $plan->memo }}\n \n @endforeach\n\nAdditional question; how can I merge Laravel module route with project route because when I use module route they override project route?\ni tried to make traits folder and give another route for module in other route they working fine. but in main route it's not working."} +{"id": "000521", "text": "im new on livewire and im got stuck from it.\n\"i want to get 'id' from route and fetch it using livewire component (function) to get data from by id in the database, and get error about parameter 0 or somethink, here the error :\nUnable to resolve dependency [Parameter #0 [ $id ]] in class App\\Livewire\\StartExams\n\ni know if it using laravel i can get te data, but when im using livewire got error\nhere my button :\nexams->id) }}\"\n wire:click=\"mount({{ $getExamsOnLecture->exams->id }})\">\n \n \n {{ $getExamsOnLecture->exams->name }}\n \n\n\nmy route:\nuse App\\Livewire\\StartExams;\nRoute::get('/exams/{any}/start', StartExams::class)->name('start-quiz');\n\nhere the livewire controller:\nnamespace App\\Livewire;\nuse App\\Models\\Questions;\nuse App\\Models\\QuestionsOption;\nuse Livewire\\Component;\n\nclass StartExams extends Component\n{\n public $id;\n public $questions = [];\n public $questionIndex = 0;\n public $question;\n public $answer;\n public $questionOption;\n\n public bool $finished = false;\n\n public function mount($id)\n {\n $this->id = $id;\n $this->questions = Questions::where('exam_id', $this->id)->get();\n\n foreach ($this->questions as $key => $questionOption) {\n $this->questionOption[$key]['options'] = QuestionsOption::where('question_id', $questionOption->id)->get();\n // return $this->questions[$key]['options'];\n }\n\n }\n\n public function resetRadioOptions()\n {\n $this->answer = null;\n }\n\n public function submitQuiz()\n {\n //otw making save to database ///\n\n $this->questionIndex++;\n if ($this->questionIndex >= count($this->questions)) {\n $this->finished = true;\n } else {\n $this->question = $this->questions[$this->questionIndex];\n $this->resetRadioOptions();\n }\n }\n public function render()\n {\n return view('livewire.start-exams', [\n 'id' => $this->id,\n ])->layout('layouts.auth');\n }\n}\n\n\ni thought at first it same as laravel so im ignore it and doing other file than this, then im im going to fix it so i can get the data based bny id, it got error..\nim already trying searching on google and even AI but its failed, so im posting here..\ncan someone help me? 
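Regarding the dependency error above: a full-page Livewire component fills its mount() arguments from route parameters with the same name, so a segment called {any} never reaches a parameter called $id. A minimal sketch of the matching pair, reusing the class and query from this question:

// routes/web.php: the segment name must match the mount() argument
Route::get('/exams/{id}/start', StartExams::class)->name('start-quiz');

// app/Livewire/StartExams.php
public function mount($id)
{
    $this->id = $id;
    $this->questions = Questions::where('exam_id', $this->id)->get();
}

With the names aligned, the wire:click="mount(...)" on the link is unnecessary, since mount() runs automatically when the page component loads.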
just giving me step by step that i must fixed.\ni want the id get it form id from tag route so i cane get data based by id\n$this->questions = Questions::where('exam_id', $this->id)->get();\n\nbut if it like this (work, cause im put number instead id)\n$this->questions = Questions::where('exam_id', '1')->get();\n\ni can get the record, but based by $id its got error.."} +{"id": "000522", "text": "I'm a beginner with Laravel and inertia.\nI use Laravel 10 with Inertia and react.\nWhen I go to the index page, the field \"$this->typeEducation->title\" is filled, but when I click edit, the field is empty. I then get the error message: \"Attempt to read property \"title\" on null\"\nThe model:\nclass Education extends Model\n{\n use HasFactory;\n\n protected $fillable = [\n 'title',\n 'education_type_id',\n 'is_active',\n 'start_date',\n 'end_date',\n ];\n\n public function typeEducation() {\n return $this->belongsTo(EducationType::class, 'education_type_id', 'id');\n }\n}\n\nThe resource:\nclass EducationResource extends JsonResource\n{\n /**\n * Transform the resource into an array.\n *\n * @return array\n */\n public function toArray(Request $request): array\n {\n return [\n 'id' => $this->id,\n 'title' => $this->title,\n 'type' => $this->typeEducation->title,\n 'isActive' => $this->is_active,\n 'startDate' => $this->start_date,\n 'endDate' => $this->end_date,\n 'educationTypes' => EducationTypeResource::collection($this->whenLoaded('educationTypes'))\n ];\n }\n}\n\nThe Controller\nclass EducationController extends Controller\n{\n /**\n * Display a listing of the resource.\n */\n public function index(): Response\n {\n return Inertia::render('School/Education/EducationIndex', [\n 'education' => EducationResource::collection(Education::all())\n ]);\n }\n\n /**\n * Show the form for editing the specified resource.\n */\n public function edit(Education $education): Response\n {\n $education->load(['typeEducation']);\n return Inertia::render('School/Education/Edit', [\n 'education' => new EducationResource($education),\n 'educationTypes' => EducationTypeResource::collection(EducationType::all())\n ]);\n }\n}\n\nWhat am I doing wrong?"} +{"id": "000523", "text": "I am building a laravel project with filament.\nEverything worked normally, but after working on translations in Laravel, the Fillament translations stopped working.\nHas anyone ever experienced this?\nI published fillament and laravel translation files already. Also published fillament views but could not find/replace the labels to test.\nMaybe I need to override Laravel translations to Filament translations? Is that even possible to do and not break the project?\nI will insert the before and after images."} +{"id": "000524", "text": "I already have this app/Http/Middleware/Authenticate.php\nexpectsJson() ? null : route('home');\n }\n}\n\nThe problem is that sometime Laravel in case user is not authenticated is traveling trought this:\nvendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Exceptions\\Handler .php:\u2009570\n protected function unauthenticated($request, AuthenticationException $exception)\n\n {\n\n return $this->shouldReturnJson($request, $exception)\n\n ? response()->json(['message' => $exception->getMessage()], 401)\n\n : redirect()->guest($exception->redirectTo() ?? route('login'));\n\n }\n\nWhere can I override this!?! I have no 'login' named route ! 
I need to return to route named 'home'."} +{"id": "000525", "text": "I have been working on rest api and now I am working on some tests, but since I want to use slug on my model as primary key (usual setup with getKeyName) I have strange behaviour. For example for this create method:\n$modelInstance = Model::create([\n 'title' => 'Old Title',\n 'slug' => Str::slug('Old Title'),\n 'description' => 'Old description',\n 'rating' => 4.2,\n]);\n\nI get these results when dump test:\n#original: array:6 [\n \"title\" => \"Old Title\"\n \"slug\" => 9\n \"description\" => \"Old description\"\n \"rating\" => 4.2\n \"updated_at\" => \"2023-11-17 08:45:08\"\n \"created_at\" => \"2023-11-17 08:45:08\"\n]\n#changes: []\n#casts: []\n#classCastCache: []\n#attributeCastCache: []\n\nThen when I comment this:\npublic function getKeyName(): string\n{\n return 'slug';\n}\n\npart in my model and run the same test I get this:\n#original: array:7 [\n \"title\" => \"Old Title\"\n \"slug\" => \"old-title\"\n \"description\" => \"Old description\"\n \"rating\" => 4.2\n \"updated_at\" => \"2023-11-17 08:47:31\"\n \"created_at\" => \"2023-11-17 08:47:31\"\n \"id\" => 9\n]\n#changes: []\n\nI haven't worked much with Laravel 10 lately, but is there something I am missing and I should be aware of?"} +{"id": "000526", "text": "Using Laravel's Lumen 9.1.6\nI am following the documentation but still have a problem with an optional trailing route parameter defined like this:\nroutes/web.php\n$router->group(['prefix' => 'question'], function() use($router) { \n $router->get('log/{eid}/{uid}[/{year}]', ['middleware' => 'api.auth', 'uses' => 'QuestionController@getLogs']);\n});\n\nand then in QuestionController.php\npublic function getLogs($eid, $uid, $year = null, Request $request) {\n ....\n}\n\nIf I then call\napi.tld/question/log/1/2/2021 - it works fine, however\napi.tld/question/log/1/2 - throws Unable to resolve dependency [Parameter #2 [ $year ]] in class App\\Http\\Controllers\\QuestionController\nLumen docs are very sparse on this (although I believe I've followed the syntax correctly). Any ideas?"} +{"id": "000527", "text": "I have an items json array column in a Catalog eloquent model that contains itemable_type and itemable_id properties, it looks like this [{\"itemable_id\": 1, \"qty\": 1, \"itemable_type\":\"\\\\App\\\\Models\\\\Food\"}], can I use this information to manually initialize a One To Many Polymorphic relationships?\nBasically my database looks like this:\n\n\n\n\nID\nitems\n...\n\n\n\n\nInt\nJson (Eg. 
[{\"itemable_id\": 1, \"qty\": 1, \"itemable_type\":\"\\App\\Models\\Food\"}])\n...\n\n\n\n\nI am able a make a regular Eloquent Collection with something like this:\n\nclass VendorCatalog extends Model\n{\n public function catalogable(): \\Illuminate\\Database\\Eloquent\\Builder\n {\n $items = \\Illuminate\\Database\\Eloquent\\Collection::make();\n\n $models = $this->items\n ->filter(fn ($i) => !empty($i['type']))\n ->filter(fn ($i) => class_exists(str(\"\\\\App\\\\Models\\\\\")->append($i['type'])->toString()))\n ->groupBy('type')->map(function ($items, $group) {\n $class = str(\"\\\\App\\\\Models\\\\\")->append($group)->toString();\n return $class::whereIn('id', $items->pluck('id'))->get();\n });\n\n return $items->merge($models->flatten())->toQuery();\n }\n}\n\nThis works, but considering that the relationship might contain unrelated models, a One To Many Polymorphic relationship would be better suited.\nSo it possible or should I go ahead and just add every item as a distinct record instead of storing IDs and types?\nI know this is a complex requirement, but hey! Who doesn't want a little challenge? Also, this will make my database structure a little less cluttered and more manageable.\nEDIT\nAlso, I realised that if I call the getQuery() method on the catalogable I get this error Unable to create query for collection with mixed types\n$catalogable = VendorCatalog::first()->catalogable()->getQuery();"} +{"id": "000528", "text": "This is the Model where I insert, am I doing it correctly?\npublic static function insertPlotDetails($data){\n $PlotDetails = array();\n $PlotDetails['L_Name'] = $data['Plot'];\n $PlotDetails['L_Owner'] = $data['Owner'];\n $PlotDetails['L_Spice'] = $data['Spice'];\n $PlotDetails['L_AreaAcre'] = $data['Area'];\n $PlotDetails['L_AreaHa'] = 0;\n $PlotDetails['L_ProducedAmt_Kg'] = $data['PdAmount'];\n $PlotDetails['L_District'] = $data['District'];\n $PlotDetails['L_Block'] = $data['Block'];\n $PlotDetails['L_Village'] = $data['Village'];\n $PlotDetails['lat'] = $data['Latitude'];\n $PlotDetails['lng'] = $data['Longitude'];\n $PlotDetails['img'] = $data['ImageUpload'];\n $PlotDetails['PdArea_JSON'] = json_encode([\"'TotalProduction': {'PlotName': '\".$data['Plot'].\"','Area':[{'Name': 'Main','Variant':[{'IISR_Varada': ''\".$data['IISR_Varada'].\"'','IISR_Mahima': '\".$data['IISR_Mahima'].\"','Nadia_Nagaland': '\".$data['Nadia_Nagaland'].\"','Nadia_Odisha': '\".$data['Nadia_Odisha'].\"','Karbi': '\".$data['Karbi'].\"','Local': '\".$data['Local'].\"' } ] },{'Name': 'Trial','Variant': [ {'IISR_Varada': '\".$data['T_IISR_Varada'].\"','IISR_Mahima': '\".$data['T_IISR_Mahima'].\"','Nadia_Nagaland': '\".$data['T_Nadia_Nagaland'].\"','Nadia_Odisha': '\".$data['T_Nadia_Odisha'].\"','Karbi': '\".$data['T_Karbi'].\"','Local': '\".$data['T_Local'].\"' } ] } ]} }\"]);\n\n \n\n try {\n\n $PlotDetails_response = DB::table('locationdb')->insert($PlotDetails);\n \n \n } catch (\\Exception $e) {\n \n return $e->getMessage();\n }\n // \n \n return $PlotDetails_response;\n\n}\n\nThis is how I get The Data and try to access the JSON.\npublic function GetJsonTest()\n{\n $data= MapModel::GetJsonTest();\n \n $json = $data[0]->PdArea_JSON;\n $array = json_decode($json, true);\n dd($json->TotalProduction->);\n\n}\n\nMy Intention was to be able to read the index of like 'Name' and 'Variety'. 
It is outputting as string and I cannot find a way to convert it back to array or Object so that I can read and display.\n[\"'TotalProduction': {'PlotName': 'TestJson','Area':[{'Name': 'Main','Variant':[{'IISR_Varada': ''10'','IISR_Mahima': '12','Nadia_Nagaland': '13','Nadia_Odisha': '14','Karbi': '15','Local': '16' } ] },{'Name': 'Trial','Variant': [ {'IISR_Varada': '11','IISR_Mahima': '12','Nadia_Nagaland': '13','Nadia_Odisha': '14','Karbi': '15','Local': '16' } ] } ]} }\"]"} +{"id": "000529", "text": "I'm using Laravel 10, this code is in my web.php file\n...\nRoute::resource('photos', PhotoController::class);\n....\n\nand it produce this list of routes:\nphotos ............................ photos.index \u203a Admin\\PhotoController@index \nphotos ............................ photos.store \u203a Admin\\PhotoController@store \nphotos/create ..................... photos.create \u203a Admin\\PhotoController@create \nphotos/{photo} .................... photos.show \u203a Admin\\PhotoController@show \nphotos/{photo} .................... photos.update \u203a Admin\\PhotoController@update \nphotos/{photo} .................... photos.destroy \u203a Admin\\PhotoController@destroy \nphotos/{photo}/edit ............... photos.edit \u203a Admin\\PhotoController@edit \n\nit all works fine. Now, I want to protect with authentication all this page, and i use this code\nRoute::prefix('admin')->middleware('auth')->group(function () {\n Route::get('photos', PhotoController::class);\n});\n\nI got this error:\n Target class [PhotoController] does not exist.\n\nso i add: ->namespace('App\\Http\\Controllers\\Admin')\nRoute::prefix('admin')->namespace('App\\Http\\Controllers\\Admin')->middleware('auth')->group(function () {\n Route::resource('photos', PhotoController::class);\n});\n\nnow the site works, but the command php artisan route:list says:\n Class \"PhotoController\" does not exist \n\nI have to specify:\nuse App\\Http\\Controllers\\Admin\\PhotoController;\n\nThe question is.. which is the correct way? use namespace or indicate it with ->namespace()?"} +{"id": "000530", "text": "Using Laravel 10 and VS Code this line was considered error:\nAuth::routes();\n\nVS Code says:\nUndefined type 'Auth'.intelephense(P1009)\n\nBut no problem in the webapp.\nI've already tried to do:\nCTRL + SHIFT + P --> digit \"Index workspace\" --> Enter\n\nbut it doesnt work"} +{"id": "000531", "text": "I am using Laravel 10 with Livewire v3. I have a method that accepts array as one of the parameter. If the parameter is just an array then the method will behave differently, if the parameter is associative array then the method will use the keys and values both and behave differently. The passed associative array can have sequential keys as shown in the code below.\nBelow is the extracted logic and I don't want it to detect $associative_sequential array as a sequential array. 
What do I need to add/change for that to happen because I don't want $sequential array to be identified as associative array either.\npublic function doSomething($passedArray) {\n if (is_array($passedArray)) {\n if ($passedArray == array_values($passedArray)) {\n echo \"The passed array is a sequential array.\\n\";\n } elseif (array_keys($passedArray) == array_filter(array_keys($passedArray), 'is_int')) {\n echo \"The passed array is an associative array with integer keys.\\n\";\n } else {\n echo \"The passed array is an associative array with non-integer keys.\\n\";\n }\n } else {\n echo \"The passed variable is not an array.\\n\";\n }\n}\n\n$sequential = array(\"apple\", \"banana\", \"cherry\");\n$associative_sequential = array(0 => \"apple\", 1 => \"banana\", 2 => \"cherry\");\n$associative_non_sequential = array(10 => \"apple\", 20 => \"banana\", 25 => \"cherry\");\n$strict_associative = array(\"fruit1\" => \"apple\", \"fruit2\" => \"banana\", \"fruit3\" => \"cherry\");\n\ndoSomething($sequential);\ndoSomething($associative_sequential);\ndoSomething($associative_non_sequential);\ndoSomething($strict_associative);\n\nOutputs:\n// This is what I am getting\nThe passed array is a sequential array.\nThe passed array is a sequential array.\nThe passed array is an associative array with integer keys.\nThe passed array is an associative array with non-integer keys.\n\n// This is what I want to happen\nThe passed array is a sequential array.\nThe passed array is an associative array with integer keys.\nThe passed array is an associative array with integer keys.\nThe passed array is an associative array with non-integer keys."} +{"id": "000532", "text": "In a Laravel unit test we are testing and mocking ReCaptcha\\ReCaptcha (and NZTim\\Mailchimp\\Mailchimp). The working test code is as follows:\n$this->mock(\\ReCaptcha\\ReCaptcha::class, function ($mock) use ($mock_response) {\n $mock->shouldReceive('make')\n ->once()\n ->andReturn($mock);\n $mock->shouldReceive('verify')\n ->once()\n ->andReturn($mock_response);\n });\n\nIn the controller, we have:\n$recaptcha = app(ReCaptcha::class)->make();\n\nTo get this to work, as the ReCaptcha class requires a constructor variable, we created the following factory:\n// ReCaptchaFactory.php\n\npublic function make()\n{\n $secret = config('services.recaptcha.secret');\n return new ReCaptcha($secret);\n}\n\nWithout the required constructor variable, we can accomplish this with 1 line and remove the extra factory and AppServiceProvider updates:\n$recaptcha = app(ReCaptcha::class);\n\nSomething like the following?\n$recaptcha = resolve(ReCaptcha::class, ['param1'=>config('services.recaptcha.secret'));\n\nIt just feels like we have a lot of extra unnecessary code."} +{"id": "000533", "text": "When I try to access an endpoint in my Laravel API, Jetstream redirects to the dashboard page. I am already logged in, and when I go to my endpoint from the dashboard, it goes back to the dashboard. I made my application without Jetstream, then I made a new project and copied my code to it (controllers, models, policies, etc.) I am using Laravel 10 and Jetstream 4. 
Here is my web.php:\nRoute::get('/', function () {\n return view('welcome');\n});\n\nRoute::middleware([\n 'auth:sanctum',\n config('jetstream.auth_session'),\n 'verified',\n])->group(function () {\n Route::get('/dashboard', function () {\n return view('dashboard');\n })->name('dashboard');\n \n});\n\napi.php (I am trying to get to the assets route)\nRoute::middleware('auth:sanctum')->get('/user', function (Request $request) {\n return $request->user();\n});\n\nRoute::group(['namespace' => '\\App\\Http\\Controllers\\Api', 'middleware' => 'auth:sanctum', config('jetstream.auth_session'),\n'verified',], function() {\n Route::apiResource('users', UserController::class)->names('users');\n Route::apiResource('assets', AssetController::class)->names('assets');\n Route::apiResource('events', EventController::class)->names('events');\n\n Route::post('assets/bulk', ['uses' => 'App\\Http\\Controllers\\Api\\AssetController@bulkStore']);\n});\n\nI added the config('jetstream.auth_session') part because it's used in web.php. It didn't seem to make any difference.\nHere is part of the page with the link that I tried to click on (resources/views/navigation-menu.blade.php)\n \n
\n routeIs('dashboard')\">\n {{ __('Dashboard') }}\n \n
\n
\n routeIs('assets')\">\n {{ __('Assets') }}\n \n
\n \n\nI don't know what other files are relevant to this issue. I don't have much experience with Laravel, and this is my first time using Jetstream. I read an article about how to redirect to a different route, but I don't want to do that. How do I make it go to my API endpoint without redirecting?\nI tried clicking on the \"Assets\" link in the navigation menu at the dashboard route. I was expecting it to show a blank page, but it just redirected back to the dashboard. I also tried php artisan route:clear and doing Empty Cache and Hard Reload in Chrome, but I got the same result. Here is what the network tab of my developer tools looks like:\nThere's a 302 response on assets and login.The one on login comes from the assets endpoint. THere's a 200 on dashboard coming from login."} +{"id": "000534", "text": "I'm trying to run Laravel but I get this error \"Target class [set_locale] does not exist.\". This problem appeared when transferring from Laravel 7 to 10. What could this be related to? What files need to be provided to make it clear what the problem is?\nroutes web.php\nuse App\\Http\\Controllers;\n/*\n|--------------------------------------------------------------------------\n| Web Routes\n|--------------------------------------------------------------------------\n|\n| Here is where you can register web routes for your application. These\n| routes are loaded by the RouteServiceProvider within a group which\n| contains the \"web\" middleware group. Now create something great!\n|\n*/\n\nAuth::routes([\n 'reset' => false,\n 'confirm' => false,\n 'verify' => false,\n]);\n\nRoute::get('locale/{locale}', 'MainController@changeLocale')->name('locale');\nRoute::get('currency/{currencyCode}', 'MainController@changeCurrency')->name('currency');\nRoute::get('/logout', 'Auth\\LoginController@logout')->name('get-logout');\n\nRoute::middleware(['set_locale'])->group(function () {\n Route::get('reset', 'ResetController@reset')->name('reset');\n\n Route::middleware(['auth'])->group(function () {\n Route::group([\n 'prefix' => 'person',\n 'namespace' => 'Person',\n 'as' => 'person.',\n ], function () {\n Route::get('/orders', 'OrderController@index')->name('orders.index');\n Route::get('/orders/{order}', 'OrderController@show')->name('orders.show');\n });\n\n Route::group([\n 'namespace' => 'Admin',\n 'prefix' => 'admin',\n ], function () {\n Route::group(['middleware' => 'is_admin'], function () {\n Route::get('/orders', 'OrderController@index')->name('home');\n Route::get('/orders/{order}', 'OrderController@show')->name('orders.show');\n });\n\n Route::resource('categories', 'CategoryController');\n Route::resource('products', 'ProductController');\n Route::resource('products/{product}/skus', 'SkuController');\n Route::resource('properties', 'PropertyController');\n Route::resource('merchants', 'MerchantController');\n Route::get('merchant/{merchant}/update_token', 'MerchantController@updateToken')->name('merchants.update_token');\n Route::resource('coupons', 'CouponController');\n Route::resource('properties/{property}/property-options', 'PropertyOptionController');\n });\n });\n\n\n Route::get('/', 'MainController@index')->name('index');\n Route::get('/categories', 'MainController@categories')->name('categories');\n Route::post('subscription/{skus}', 'MainController@subscribe')->name('subscription');\n\n Route::group(['prefix' => 'basket'], function () {\n Route::post('/add/{skus}', 'BasketController@basketAdd')->name('basket-add');\n\n Route::group([\n 'middleware' => 'basket_not_empty',\n 
], function () {\n Route::get('/', 'BasketController@basket')->name('basket');\n Route::get('/place', 'BasketController@basketPlace')->name('basket-place');\n Route::post('/remove/{skus}', 'BasketController@basketRemove')->name('basket-remove');\n Route::post('/place', 'BasketController@basketConfirm')->name('basket-confirm');\n Route::post('coupon', 'BasketController@setCoupon')->name('set-coupon');\n });\n });\n\n Route::get('/{category}', 'MainController@category')->name('category');\n Route::get('/{category}/{product}/{skus}', 'MainController@sku')->name('sku');\n});\n\nkernel.php\nnamespace App\\Http;\n\nuse Illuminate\\Foundation\\Http\\Kernel as HttpKernel;\n\nclass Kernel extends HttpKernel\n{\n /**\n * The application's global HTTP middleware stack.\n *\n * These middleware are run during every request to your application.\n *\n * @var array\n */\n protected $middleware = [\n \\App\\Http\\Middleware\\TrustProxies::class,\n \\App\\Http\\Middleware\\CheckForMaintenanceMode::class,\n \\Illuminate\\Foundation\\Http\\Middleware\\ValidatePostSize::class,\n \\App\\Http\\Middleware\\TrimStrings::class,\n \\Illuminate\\Foundation\\Http\\Middleware\\ConvertEmptyStringsToNull::class,\n ];\n\n /**\n * The application's route middleware groups.\n *\n * @var array\n */\n protected $middlewareGroups = [\n 'web' => [\n \\App\\Http\\Middleware\\EncryptCookies::class,\n \\Illuminate\\Cookie\\Middleware\\AddQueuedCookiesToResponse::class,\n \\Illuminate\\Session\\Middleware\\StartSession::class,\n // \\Illuminate\\Session\\Middleware\\AuthenticateSession::class,\n \\Illuminate\\View\\Middleware\\ShareErrorsFromSession::class,\n \\App\\Http\\Middleware\\VerifyCsrfToken::class,\n \\Illuminate\\Routing\\Middleware\\SubstituteBindings::class,\n ],\n\n 'api' => [\n 'throttle:60,1',\n 'bindings',\n ],\n ];\n\n /**\n * The application's route middleware.\n *\n * These middleware may be assigned to groups or used individually.\n *\n * @var array\n */\n protected $routeMiddleware = [\n 'auth' => \\App\\Http\\Middleware\\Authenticate::class,\n 'is_admin' => \\App\\Http\\Middleware\\CheckIsAdmin::class,\n 'basket_not_empty' => \\App\\Http\\Middleware\\BasketIsNotEmpty::class,\n 'auth.basic' => \\Illuminate\\Auth\\Middleware\\AuthenticateWithBasicAuth::class,\n 'bindings' => \\Illuminate\\Routing\\Middleware\\SubstituteBindings::class,\n 'cache.headers' => \\Illuminate\\Http\\Middleware\\SetCacheHeaders::class,\n 'can' => \\Illuminate\\Auth\\Middleware\\Authorize::class,\n 'guest' => \\App\\Http\\Middleware\\RedirectIfAuthenticated::class,\n 'password.confirm' => \\Illuminate\\Auth\\Middleware\\RequirePassword::class,\n 'signed' => \\Illuminate\\Routing\\Middleware\\ValidateSignature::class,\n 'throttle' => \\Illuminate\\Routing\\Middleware\\ThrottleRequests::class,\n 'verified' => \\Illuminate\\Auth\\Middleware\\EnsureEmailIsVerified::class,\n 'set_locale' => \\App\\Http\\Middleware\\SetLocale::class,\n ];\n\n /**\n * The priority-sorted list of middleware.\n *\n * This forces non-global middleware to always be in the given order.\n *\n * @var array\n */\n protected $middlewarePriority = [\n \\Illuminate\\Session\\Middleware\\StartSession::class,\n \\Illuminate\\View\\Middleware\\ShareErrorsFromSession::class,\n \\App\\Http\\Middleware\\Authenticate::class,\n \\Illuminate\\Routing\\Middleware\\ThrottleRequests::class,\n \\Illuminate\\Session\\Middleware\\AuthenticateSession::class,\n \\Illuminate\\Routing\\Middleware\\SubstituteBindings::class,\n \\Illuminate\\Auth\\Middleware\\Authorize::class,\n 
];\n}\n\nControllers > middleware SetLocal.php\nnamespace App\\Http\\Middleware;\n\nuse Closure;\nuse Illuminate\\Support\\Facades\\App;\n\nclass SetLocale\n{\n /**\n * Handle an incoming request.\n *\n * @param \\Illuminate\\Http\\Request $request\n * @param \\Closure $next\n * @return mixed\n */\n public function handle($request, Closure $next)\n {\n $locale = session('locale');\n App::setLocale($locale);\n return $next($request);\n }\n}"} +{"id": "000535", "text": "hello I'm new for livewire 3 I have table inside form, data table having radio button for yes or no\nafter selecting yes or no I'm going to submit that form that time I want to store fetch detail like id and name along this. now I'm any getting radio button value, me to solve\nform page\n\nmy code form\n
\n \n \n \n \n \n \n \n \n \n \n \n @foreach ($UserDetail as $key=>$UserDetails) \n @php $ID = 0+$key @endphp\n \n \n \n \n \n \n @endforeach \n \n
SL | ID No | NAME | YES | NO
\n {{ $key + 1 }}\n \n {{ $UserDetails->id }}\n \n {{ $UserDetails->name }}\n \n
\n\nController\nclass StudentAttendance extends Component\n{\npublic $name; \npublic $id; \npublic $TableInput = [];\n\n\npublic function mount()\n{\n $this->User = User::all(); \n}\n\npublic function Save() \n{ \n $bel = Data::create([ \n \n 'Id' => $value['Id'],\n 'name' => $value['name'],\n 'data' => $value['data'],\n ]);\n} \n} \n\n}"} +{"id": "000536", "text": "I am trying to set up localization on my Laravel Inerita (Vue.js). I know about https://github.com/mcamara/laravel-localization, but this does not support Inertia (at least I was not successful with running this on my Vue.js file) {{ __(\"text\") }} does not work in inertia error: TypeError: _ctx.__ is not a function.\nAnyway, I am using a different localization package called laravel-vue-i18n.\nI am successful in using this on Vue.js, but I am having problems when setting the locale based on URL. How do I set my routes/middleware to use a nullable locale (en as default)?\nFile web.php\n// Can be nullable locale?\nRoute::middleware(['setLocale'])->prefix('{locale?}')->group(function () {\n Route::resource('posts', PostController::class);\n Route::resource('comments', CommentController::class);\n\n});\n\nFile SetLocaleMiddleware.php\nclass SetLocaleMiddleware\n{\n public function handle($request, Closure $next, $locale = 'en')\n {\n \\Log::info($locale); // Always logs as 'en' even if I add 'ja' in the URL\n \\Log::info($request->route('locale')); // Locale or whatever is the text after localhost/ for some reason\n\n if (!in_array($locale, ['en', 'ja'])) {\n abort(400);\n }\n\n App::setLocale($locale);\n\n return $next($request);\n }\n}\n\nFile app/Kernel.php\nprotected $middlewareAliases = [\n 'setLocale' => \\App\\Http\\Middleware\\SetLocaleMiddleware::class,\n];\n\nExpected results:\n// Set application language to Japanese\nhttp://localhost/ja\nhttp://localhost/ja/posts\nhttp://localhost/ja/comments\n\n// Set application language to English as default\nhttp://localhost\nhttp://localhost/posts\nhttp://localhost/comments\n\nNote: it does not have to be middleware."} +{"id": "000537", "text": "I am having a issue hitting an endpoint in my API and getting the correct JSON response in a firefox.\nI am attempting to hit https://...locationtypes/1 and getting this JSON response:\n{\n \"data\": {\n \"id\": null,\n \"name\": null\n }\n}\n\nHere is a response from hitting https://...locationtypes:\n{\n \"data\": [\n {\n \"id\": 1,\n \"name\": \"Event\"\n },\n {\n \"id\": 2,\n \"name\": \"Vendor\"\n },\n {\n \"id\": 3,\n \"name\": \"Utility\"\n }\n}\n\nLocationType.php:\nhasMany(Location::class);\n }\n\npublic function resolveRouteBinding($value, $field = null)\n {\n return $this->where('id', $value)->firstOrFail();\n }\n}\n\n\nLocationTypeController.php (show and store method):\npublic function store(StoreLocationTypeRequest $request)\n {\n return new LocationTypeResource(LocationType::create($request->all()));\n }\n\npublic function show(LocationType $locationType)\n {\n $includeLocations = request()->query('includeLocations');\n\n if ($includeLocations) {\n return new LocationTypeResource($locationType->loadMissing('locations'));\n }\n return new LocationTypeResource($locationType);\n }\n\nLocationTypeResource.php:\nclass LocationTypeResource extends JsonResource\n{\n /**\n * Transform the resource into an array.\n *\n * @return array\n */\n public function toArray(Request $request): array\n {\n return [\n 'id' => $this->id,\n 'name' => $this->name,\n 'locations' => LocationResource::collection($this->whenLoaded('locations'))\n ];\n 
}\n}\n\napi.php:\nRoute::group(['prefix' => 'v1'], function () {\n // CRUD\n Route::apiResource('events', EventController::class);\n Route::apiResource('info', InfoController::class);\n Route::apiResource('locations', LocationController::class);\n //Route::apiResource('locationtypes', LocationTypeController::class);\n Route::get('locationtypes', [LocationTypeController::class, 'index']);\n Route::get('locationtypes/{id}', [LocationTypeController::class, 'show']);\n\n // Bulk POST\n Route::post('events/bulk', ['uses'=>'EventController@bulkStore']);\n Route::post('info/bulk', ['uses'=>'InfoController@bulkStore']);\n Route::post('locations/bulk', ['uses'=>'LocationController@bulkStore']);\n Route::post('locationtypes/bulk', ['uses'=>'LocationTypeController@bulkStore']);\n\n});\n\nI apologize if I have forgotten something, I am new to Laravel still. Also, this is Laravel 10. To clarify the problem, I cannot figure out why the id endpoint is spewing null back at me.\nEDIT: I was able to use Route::get(locationtypes/{locationType}) to obtain the desired output by using the 'name' field but I am still unable to use the 'id' field and get null as output. I edited the above code to display my changes."} +{"id": "000538", "text": "I'm trying to get groupBy value and it giving successfully, after this I'm need only group list array\nhow to archive that.\nmy Code\n$this->StudentList = Student::get();\n$data = $this->StudentList->sortBy('date', SORT_NATURAL)->groupby('date');\n \n($data);\n\nGroupBy Result\narray:14 [\u25bc \n \"2024-01-01\" => Illuminate\\Database\\Eloquent\\Collection {#1977 \u25b6}\n \"2024-01-02\" => Illuminate\\Database\\Eloquent\\Collection {#1978 \u25b6}\n \"2024-01-03\" => Illuminate\\Database\\Eloquent\\Collection {#1979 \u25b6}\n \"2024-01-04\" => Illuminate\\Database\\Eloquent\\Collection {#1980 \u25b6}\n \"2024-01-05\" => Illuminate\\Database\\Eloquent\\Collection {#1981 \u25b6}\n]\n\nneed this group date like this in array\narray:14 [\u25bc \n \"2024-01-01\" \n \"2024-01-02\" \n \"2024-01-03\" \n \"2024-01-04\" \n \"2024-01-05\" \n]\n\nlaravel get groupBy name list"} +{"id": "000539", "text": "I have following models in Laravel:\n\nUser Model\nTeam Model\nCompany Model\n\nAnd DB tables:\nusers\n- id\n\ncompany\n- id\n\nteams\n- id\n- company_id (team can only belong to one company)\n\n// pivot tables\ncompany_user\n- user_id\n- company_id\n\nteam_user\n- team_id\n- user_id\n\nI'm trying to create relationship on User Model that will get Company With teams that only belongs to that user. I can make it with joins, but I would like to have a relationship. I have tried different packages, but still can't figure it out. It's just sad 8 hours."} +{"id": "000540", "text": "I have two models, List and Item, with a many-to-many relationship. Since I don\u2019t care about getting the lists an item is in (just the items in the list), my models only define half the relationship.\nclass App\\Models\\List extends Model\n{\n protected function items(): Illuminate\\Database\\Eloquent\\Relations\\BelongsToMany\n { /*...*/ }\n}\n\nThen I have a route, Route::post('/list/{item}', [ListController::class, 'item']) to which a form submits an item and some other data. 
The controller is responsible for retrieving the correct List model and storing the relationship & details.\nclass App\\Http\\Controllers\\ListController extends Controller\n{\n public function item($request, App\\Models\\Item $item)\n {\n /* request validation */\n\n $list = self::current();\n\n $list->items()->detach($item->id);\n $list->items()->attach($item->id, [/* other data */]);\n\n $list->save();\n\n return back();\n }\n\n static function current(): App\\Models\\List\n { /*...*/ }\n}\n\nBut whenever I post to the route, I get an error saying the static items() method is not defined:\n\nCall to undefined method App\\Models\\List::items()\n\nChecks\n\nDumping the result of ListController::current() shows an instance of App\\Models\\List. This instance does have data from the database (where available).\n\nI have verified that BelongsToMany is the correct relationship type to return.\n\nI can return Item instances from a List instance without accessing the pivot table directly, meaning my table schema matches my model definitions.\n\nI have verified that the request and route are passing data into the controller as expected.\n\n\nI started with ->syncWithoutDetaching(), but it returns the same error. I switched to ->attach() and ->detach() for simplicity."} +{"id": "000541", "text": "When using contextual information if logging in Laravel v10 (https://laravel.com/docs/10.x/logging#contextual-information), the contextual information doesn't get merged when specifying the channel name.\ne.g.: This works fine:\nLog::info(\n 'User {user} created Thing {id}', \n [\n 'user' => auth()->user()->id,\n 'id' => $thing->id\n ]\n);\n\n...produces:\n[2024-01-31 08:44:23] local.INFO: User 1 created Thing 3 {\"user\":1,\"id\":3} \n\nHowever, this doesn't work:\nLog::channel('mychannel')->info(\n 'User {user} created Thing {id}',\n [\n 'user' => auth()->user()->id,\n 'id' => $thing->id\n ]\n);\n\n...as it produces:\n[2024-01-31 08:36:15] local.INFO: User {user} created Thing {id} {\"user\":1,\"id\":2} \n\nAny idea what I'm doing wrong?"} +{"id": "000542", "text": "I am using PHPStan within a Laravel 10 app and have set this to quite a high level (good practice I guess) - I have run into a few errors and i'm trying to figure out the best way to resolve them.\n ------ ------------------------------------------------------------------------------------------ \n Line app/Actions/Reports/SetReportStatusAction.php \n ------ ------------------------------------------------------------------------------------------ \n 17 Property App\\Models\\Report::$succeeded_at (Carbon\\Carbon|null) does not accept int|null. \n\ngetStripeObject();\n $report->status = $stripeObject->status;\n $report->succeeded_at = $stripeObject->succeeded_at ?? 
null;\n }\n}\n\nReport.php\n 'json',\n 'expires_at' => 'datetime',\n 'succeeded_at' => 'datetime',\n ];\n}\n\n\nThe data succeeded_at comes from a webhook event and will be in the format similar to this - a timestamp\n\"succeeded_at\": 1706786551\n\nExpected outcome -\nPHPStan to detect no errors with the two files in question\nActual outcome -\nRecieve a phpstan error 'Property App\\Models\\Report::$succeeded_at (Carbon\\Carbon|null) does not accept int|null.'"} +{"id": "000543", "text": "I have this function in a controller in laravel 10\npublic function change($id)\n{\n\n DB::beginTransaction();\n try{\n $current_period=Period::where('current',true)->first();\n $current_period->current=false;\n $current_period->save();\n $new_period=Period::findOrFail($id);\n $new_period->current=true;\n $new_period->save();\n\n //set person's memberSince and pay_mode to what is in the person_period pivot table\n $persons=Person::all();\n $persons->each(function ($person,$key){\n global $new_period;\n //dd($new_period->id);\n $pivot=$person->periods()->where('period_id',$new_period->id)->first();\n $person->memberSince=$pivot? $pivot['memberSince']:null;\n $person->pay_mode=$pivot?$pivot['pay_mode']:null;\n $$person->save();\n });\n DB::commit();\n $result=[];\n $ps=PersonResource::collection(Person::all());\n array_push($result, $ps);\n $pr=PeriodResource::collection(all());\n array_push($result,$pr);\n return $result;\n\n }catch (\\Exception $e) {\n DB::rollBack();\n throw $e;\n }\n}\n\nit returns an error\n\nmessage 'Attempt to read property \"id\" on null'\n\non the line just behind the presently commented line\n//dd($new_period->id)\n\nwhen I uncomment the line it returns the same error.\nWhen I place the uncommented line outside the each loop, it returns the correct id of $new_period confirming that outside the each loop the $new_period exists.\nWhat am I missing here?"} +{"id": "000544", "text": "I'm building an admin section for a Laravel 10 app using the boilerplate Breeze routes and templates (for now) and I have a group of prefixed routes that I want to return a 404 error for in the following cases:\n\nthere is NO signed in user\nif there IS a signed in user, they are not an admin\n\nIn other words, the ONLY time the prefixed routes should produce anything substantial is if the authenticated user is an admin. Otherwise, I want it to appear as the routes don't even exist.\nI am at my wits end trying to return the 404 error view and status code from the middleware. I have been tinkering with the code and reading documentation for HOURS now to no avail, and yet I get the feeling the answer is right in front of me but I'm missing something.\nThis is code I have in the web.php routes file:\nRoute::prefix('admin')->middleware(['admin'])->group(function() {\n Route::get('/', [AdminDashboardController::class, 'index'])->name('admin');\n});\n\nAnd here's what I have in the handle() function for the admin middleware:\npublic function handle(Request $request, Closure $next): Response\n{\n if ($request->user() === null || $request->user()->role !== 'admin') {\n return response('This page cannot be found', 404, []);\n }\n \n return $next($request);\n}\n\nThis code works just fine. BUT, I want to return the 404 page and HTTP 404 status code rather than a plain text page with the response message. If I try to return anything EXCEPT the above it just seems to skip right on to the next bit of middleware. 
As in, it attempts to render the Breeze dashboard component as if the user was signed in, even though the user isn't signed in.\nCould anybody please guide me on how I can return the 404 error page and status code in place of the current response above?"} +{"id": "000545", "text": "My application has Users and Groups. They are related with a pivot table. The pivot table has nullable \"joined\" and \"left\" datetimes.\nHow do I use Logical Grouping on a pivot table?\nI've tried:\n$this->belongsToMany(User::class)->wherePivot(function ($query) use ($time) {\n $query->where('joined', null)->orWhere('joined', '<=', $time);\n})\n// and \"left\"\n;\n\nand\n$this->belongsToMany(User::class)->where(function ($query) use ($time) {\n $query->wherePivot('joined', null)->orWherePivot('joined', '<=', $time);\n})\n// and \"left\"\n;\n\nbut neither work. The first complains about passing a function as an argument that's expected to be a string, and the second complains about wherePivot not being defined.\nAm I missing something obvious?"} +{"id": "000546", "text": "I have the following Models and relationships.\nApp\\Subscriber.php\nmorphMany(CustomFieldValue::class, 'customFieldValueable');\n }\n}\n\nApp\\CustomField.php\nhasMany(CustomFieldValue::class);\n }\n}\n\nApp\\CustomFieldValues.php\nbelongsTo(CustomField::class);\n }\n\n public function customFieldValueable()\n {\n // If you are using custom column names, specify them here. Otherwise, ensure they match the migration.\n return $this->morphTo(null, 'custom_field_valueable_type', 'custom_field_valueable_id');\n }\n}\n\nApp\\Campaign.php\n 'datetime',\n 'exported_at' => 'datetime'\n ];\n\n protected $fillable = [\n 'uuid',\n 'name',\n 'slug',\n 'client_id'\n ];\n \n public function customFieldValues()\n {\n return $this->morphMany(CustomFieldValue::class, 'customFieldValueable');\n }\n }\n\nApp\\Newsletter.php\nbelongsTo(Brand::class);\n }\n\n public function subscribers()\n {\n return $this\n ->belongsToMany(Subscriber::class)\n ->withPivot('active')\n ->withTimestamps();\n }\n\n public function customFieldValues()\n {\n return $this->morphMany(CustomFieldValue::class, 'customFieldValueable');\n }\n}\n\nMy migration to create the table to support this looks like this,\nid();\n $table->unsignedBigInteger('custom_field_id');\n $table->string('value');\n // Manually create the polymorphic columns\n $table->unsignedBigInteger('custom_field_valueable_id');\n $table->string('custom_field_valueable_type');\n // Manually specify a shorter index name\n $table->index(['custom_field_valueable_type', 'custom_field_valueable_id'], 'custom_field_valueable_index');\n $table->timestamps();\n\n $table->foreign('custom_field_id')->references('id')->on('custom_fields')->onDelete('cascade');\n });\n\n\n }\n\n /**\n * Reverse the migrations.\n */\n public function down(): void\n {\n Schema::dropIfExists('custom_field_values');\n }\n};\n\nHowever in the Nova interface when I try and view a subscriber I get the following error,\n\nSQLSTATE[42S22]: Column not found: 1054 Unknown column 'custom_field_values.customFieldValueable_type' in 'where clause' (Connection: mysql, SQL: select * from custom_field_values where custom_field_values.customFieldValueable_type = App\\Models\\Subscriber and custom_field_values.customFieldValueable_id = 1016 and custom_field_values.customFieldValueable_id is not null order by custom_field_values.id desc limit 6 offset 0)\n\nIf someone could explain to me where I have gone wrong that would great. 
I am very new to polymorphic relationships."} +{"id": "000547", "text": "I'm using Laravel Nova 4 (Laravel 10.43.0, PHP 8.2.8).\nWhen using soft deletion, I get the error Column not found: 1054 Unknown column '*.deleted_at', where * is the table of the entity for which I use it.\nFull SQL request in the exception:\nselect * from `users` where `id` = 1 and `users`.`deleted_at` is null limit 1\n\nIf I execute this query in the SQL console in PhpStorm, it will say that there is no such column, but the query itself will execute and do it correctly. An example of an implementation with a user.\n\n\nMigration:\n// ...\nSchema::create('users', function (Blueprint $table) {\n // ...\n $table->softDeletes();\n // ...\n});\n\nModel:\n// ...\nclass User extends Authenticatable\n{\n // ...\n use SoftDeletes;\n // ...\n}"} +{"id": "000548", "text": "why I can't bind or display data from the database going to input text field. I already add the rules just like what other said but still not luck. Please help, I stack on this part.\nBy the way I am using modal from question table CRUD.\nLIVEWIRE COMPONENT\nclass ViewPromo extends Component\n{\n\n public $questions;\n public Promo $promo;\n public Question $question;\n\n protected $rules = [\n 'question.question_title' => 'required',\n 'question.question_type' => 'required',\n ];\n\n public function editQuestion(Question $question) {\n $this->question = $question;\n }\n\n public function mount(Promo $promo) {\n $this->promo = $promo;\n $this->questions = $this->promo->questions()->get();\n }\n\n public function render()\n {\n return view('livewire.admin.promos.view-promo')->extends('layouts.app')->section('contents');\n }\n}\n\nLIVEWIRE BLADE FILE\n
\n Title\n \n
\n Type\n \n \n \n \n
\n\nBUTTON FROM THE LIST\nid }})\" data-modal-target=\"edit-default-modal\" data-modal-toggle=\"edit-default-modal\" >Edit\n\nI want to display record from the database going to the input fields."} +{"id": "000549", "text": "I am new to Laravel and Vue 3.\nI have a create and update form for pages.\nThe created page works 100% with or without uploading images.\nI copied and pasted and changed the page object to the prop.page, updated the form route and a few other small changes but nothing else.\nWhen submitting the form without an image everything works fine, but if you submit a form, no input fields are submitted.\nI get no errors, from laravel, vue or console.log.\nHere is my page controller:\nauthorizeResource(Page::class, 'page');\n }\n\n /**\n * Display a listing of the resource.\n */\n public function index(): Response\n {\n $pages = Page::select('id', 'name', 'in_menu', 'position')->paginate(20);\n return Inertia::render('Admin/Pages/Index', compact('pages'));\n }\n\n /**\n * Show the form for creating a new resource.\n */\n public function create()\n {\n return Inertia::render('Admin/Pages/Create', ['status' => session('status')]);\n }\n\n /**\n * Store a newly created resource in storage.\n */\n public function store(StorePageRequest $request): RedirectResponse\n {\n $page = new Page();\n\n $newImageName = '';\n\n if($request->hasFile('header')){\n $image = $request->file('header');\n\n $originalName = pathinfo(str_replace(' ', '', $image->getClientOriginalName()), PATHINFO_FILENAME);\n $newImageName = time().'-'.$originalName.'.'.$image->guessClientExtension();\n\n $manager = new ImageManager(\n new Driver()\n );\n\n $image = $manager->read($image);\n\n $image->coverDown(1200, 675)->save(public_path('storage/uploads/page/images/full/'.$newImageName));\n $image->coverDown(600, 338)->save(public_path('storage/uploads/page/images/medium/'.$newImageName));\n $image->coverDown(100, 100)->save(public_path('storage/uploads/page/images/thumb/'.$newImageName));\n }\n\n $page->name = $request->name;\n $page->title = $request->title;\n $page->description = $request->description;\n $page->content = $request->content;\n $page->header = $newImageName;\n $page->in_menu = $request->in_menu;\n $page->position = $request->position;\n\n $page->save();\n\n return Redirect::route('admin.pages.index')->with('success', 'page created successfully');\n }\n\n /**\n * Display the specified resource.\n */\n public function show(Page $page)\n {\n \n }\n\n /**\n * Show the form for editing the specified resource.\n */\n public function edit(Page $page)\n {\n return Inertia::render('Admin/Pages/Edit', compact('page'));\n }\n\n /**\n * Update the specified resource in storage.\n */\n public function update(UpdatePageRequest $request, Page $page): RedirectResponse\n {\n dd([$request, $page]);\n $newImageName = $page->header;\n\n if($request->hasFile('header')){\n $image = $request->file('header');\n\n $originalName = pathinfo(str_replace(' ', '', $image->getClientOriginalName()), PATHINFO_FILENAME);\n $newImageName = time().'-'.$originalName.'.'.$image->guessClientExtension();\n\n $manager = new ImageManager(\n new Driver()\n );\n\n $image = $manager->read($image);\n\n $image->coverDown(1200, 675)->save(public_path('storage/uploads/page/images/full/'.$newImageName));\n $image->coverDown(600, 338)->save(public_path('storage/uploads/page/images/medium/'.$newImageName));\n $image->coverDown(100, 100)->save(public_path('storage/uploads/page/images/thumb/'.$newImageName));\n \n 
Storage::delete('/uploads/page/images/full/'.$page->header);\n Storage::delete('/uploads/page/images/medium/'.$page->header);\n Storage::delete('/uploads/page/images/thumb/'.$page->header);\n }\n\n $page->name = $request->name;\n $page->title = $request->title;\n $page->description = $request->description;\n $page->content = $request->content;\n $page->header = $newImageName;\n $page->in_menu = $request->in_menu;\n $page->position = $request->position;\n\n $page->save();\n\n return Redirect::route('admin.pages.index')->with('success', 'page update successfully');\n }\n\n /**\n * Remove the specified resource from storage.\n */\n public function destroy(Page $page)\n {\n //\n }\n}\n\nHere is my routes:\ngroup(function () {\n Route::controller(PageController::class)->group(function () {\n Route::get('/admin/pages', 'index')->name('admin.pages.index');\n\n Route::get('/admin/pages/create', 'create')->name('admin.pages.create');\n Route::post('/admin/pages/store', 'store')->name('admin.pages.store');\n\n Route::get('/admin/pages/{page}/edit', 'edit')->name('admin.pages.edit');\n Route::put('/admin/pages/{page}/update', 'update')->name('admin.pages.update');\n });\n});\n\nMy UpdatePageRequest:\n|string>\n */\n public function rules(): array\n {\n return [\n 'name' => ['required', 'string', 'max:50', 'min:4'],\n 'title' => ['required', 'string', 'max:150', 'min:4'],\n 'description' => ['required', 'string', 'max:200', 'min:20'],\n 'content' => ['required', 'string', 'min:4'],\n 'header' => ['sometimes', 'image'],\n 'in_menu' => ['required', 'numeric', Rule::in([0,1])],\n 'position' => ['required', 'numeric'],\n ];\n }\n}\n\nPage Policy:\nis_admin;\n }\n\n /**\n * Determine whether the user can view the model.\n */\n public function view(User $user, Page $page): bool\n {\n return (bool)$user->is_admin;\n }\n\n /**\n * Determine whether the user can create models.\n */\n public function create(User $user): bool\n {\n return (bool)$user->is_admin;\n }\n\n /**\n * Determine whether the user can update the model.\n */\n public function update(User $user, Page $page): bool\n {\n return true;\n // return (bool)$user->is_admin;\n }\n\n /**\n * Determine whether the user can delete the model.\n */\n public function delete(User $user, Page $page): bool\n {\n return (bool)$user->is_admin;\n }\n\n /**\n * Determine whether the user can restore the model.\n */\n public function restore(User $user, Page $page): bool\n {\n return (bool)$user->is_admin;\n }\n\n /**\n * Determine whether the user can permanently delete the model.\n */\n public function forceDelete(User $user, Page $page): bool\n {\n return (bool)$user->is_admin;\n }\n}\n\nAnd Finally my Vue:\n\n\n\n\nI have been trying everything on the internet I can find that is remotely plausable to this issue, 5 hours in total.\nI have used Chat GPT with very little help. 
It just tells me to make sure the routes and validations are correct.\nThe validation works because it is the same as the create, no differences.\nThe permissions from the policy look good and to make sure I even set them all to true to make sure it wasnt that.\nI did a console.log() of the form after it has been sent and it shows that the form.name, form.title etc was within the form.\nBut when it lands in the PageController@update it has nothing.\nHere is the dump when file is not uploaded:\narray:2 [\u25bc // app\\Http\\Controllers\\PageController.php:101\n 0 => \nIlluminate\\Http\n\\\nRequest {#37 \u25bc\n +attributes: \nSymfony\\Component\\HttpFoundation\n\\\nParameterBag {#42 \u25bc\n #parameters: []\n }\n +request: \nSymfony\\Component\\HttpFoundation\n\\\nInputBag {#41 \u25bc\n #parameters: array:6 [\u25bc\n \"name\" => \"About Us Page\"\n \"title\" => \"This is a title for about us page\"\n \"description\" => \"This is a little description about the about us page\"\n \"content\" => \"

sdasdasdasdasdasdasdasdasd

\"\n \"in_menu\" => 1\n \"position\" => 0\n ]\n }\n +query: \nSymfony\\Component\\HttpFoundation\n\\\nInputBag {#45 \u25b6}\n +server: \nSymfony\\Component\\HttpFoundation\n\\\nServerBag {#40 \u25b6}\n +files: \nSymfony\\Component\\HttpFoundation\n\\\nFileBag {#44 \u25b6}\n +cookies: \nSymfony\\Component\\HttpFoundation\n\\\nInputBag {#43 \u25b6}\n +headers: \nSymfony\\Component\\HttpFoundation\n\\\nHeaderBag {#39 \u25b6}\n #content: \"\n{\"name\":\"About Us Page\",\"title\":\"This is a title for about us page\",\"description\":\"This is a little description about the about us page\",\"content\":\"

sdasdasda\n \u25b6\n\"\n #languages: null\n #charsets: null\n #encodings: null\n #acceptableContentTypes: null\n #pathInfo: \"/admin/pages/5/update\"\n #requestUri: \"/admin/pages/5/update\"\n #baseUrl: \"\"\n #basePath: null\n #method: \"PUT\"\n #format: null\n #session: \nIlluminate\\Session\n\\\nStore {#322 \u25b6}\n #locale: null\n #defaultLocale: \"en\"\n -preferredFormat: null\n -isHostValid: true\n -isForwardedValid: true\n -isSafeContentPreferred: ? bool\n -trustedValuesCache: []\n -isIisRewrite: false\n #json: \nSymfony\\Component\\HttpFoundation\n\\\nInputBag {#41 \u25b6}\n #convertedFiles: null\n #userResolver: Closure($guard = null) {#285 \u25b6}\n #routeResolver: Closure() {#294 \u25b6}\n basePath: \"\"\n format: \"html\"\n }\n 1 => \nApp\\Models\n\\\nPage {#1324 \u25bc\n #connection: \"mysql\"\n #table: \"pages\"\n #primaryKey: \"id\"\n #keyType: \"int\"\n +incrementing: true\n #with: []\n #withCount: []\n +preventsLazyLoading: false\n #perPage: 15\n +exists: true\n +wasRecentlyCreated: false\n #escapeWhenCastingToString: false\n #attributes: array:10 [\u25b6]\n #original: array:10 [\u25b6]\n #changes: []\n #casts: []\n #classCastCache: []\n #attributeCastCache: []\n #dateFormat: null\n #appends: []\n #dispatchesEvents: []\n #observables: []\n #relations: []\n #touches: []\n +timestamps: true\n +usesUniqueIds: false\n #hidden: []\n #visible: []\n #fillable: array:7 [\u25bc\n 0 => \"name\"\n 1 => \"title\"\n 2 => \"description\"\n 3 => \"content\"\n 4 => \"header\"\n 5 => \"in_menu\"\n 6 => \"position\"\n ]\n #guarded: array:1 [\u25bc\n 0 => \"*\"\n ]\n }\n]\n\nThis is with the image uploaded:\narray:2 [\u25bc // app\\Http\\Controllers\\PageController.php:101\n 0 => \nIlluminate\\Http\n\\\nRequest {#37 \u25bc\n +attributes: \nSymfony\\Component\\HttpFoundation\n\\\nParameterBag {#42 \u25bc\n #parameters: []\n }\n +request: \nSymfony\\Component\\HttpFoundation\n\\\nInputBag {#38 \u25bc\n #parameters: []\n }\n +query: \nSymfony\\Component\\HttpFoundation\n\\\nInputBag {#45 \u25b6}\n +server: \nSymfony\\Component\\HttpFoundation\n\\\nServerBag {#40 \u25b6}\n +files: \nSymfony\\Component\\HttpFoundation\n\\\nFileBag {#44 \u25b6}\n +cookies: \nSymfony\\Component\\HttpFoundation\n\\\nInputBag {#43 \u25b6}\n +headers: \nSymfony\\Component\\HttpFoundation\n\\\nHeaderBag {#39 \u25b6}\n #content: null\n #languages: null\n #charsets: null\n #encodings: null\n #acceptableContentTypes: null\n #pathInfo: \"/admin/pages/5/update\"\n #requestUri: \"/admin/pages/5/update\"\n #baseUrl: \"\"\n #basePath: null\n #method: \"PUT\"\n #format: null\n #session: \nIlluminate\\Session\n\\\nStore {#322 \u25b6}\n #locale: null\n #defaultLocale: \"en\"\n -preferredFormat: null\n -isHostValid: true\n -isForwardedValid: true\n -isSafeContentPreferred: ? bool\n -trustedValuesCache: []\n -isIisRewrite: false\n #json: null\n #convertedFiles: null\n #userResolver: Closure($guard = null) {#285 \u25b6}\n #routeResolver: Closure() {#294 \u25b6}\n basePath: \"\"\n format: \"html\"\n }\n 1 => \nApp\\Models\n\\\nPage {#1324 \u25bc\n #connection: \"mysql\"\n #table: \"pages\"\n #primaryKey: \"id\"\n #keyType: \"int\"\n +incrementing: true\n #with: []\n #withCount: []\n +preventsLazyLoading: false\n #perPage: 15\n +exists: true\n +wasRecentlyCreated: false\n #escapeWhenCastingToString: false\n #attributes: array:10 [\u25bc\n \"id\" => 5\n \"name\" => \"About Us Page\"\n \"title\" => \"This is a title for about us page\"\n \"description\" => \"This is a little description about the about us page\"\n \"content\" => \"

sdasdasdasdasdasdasdasdasd

\"\n \"header\" => \"\"\n \"in_menu\" => 1\n \"position\" => 0\n \"created_at\" => \"2024-02-15 16:15:13\"\n \"updated_at\" => \"2024-02-15 18:56:48\"\n ]\n #original: array:10 [\u25b6]\n #changes: []\n #casts: []\n #classCastCache: []\n #attributeCastCache: []\n #dateFormat: null\n #appends: []\n #dispatchesEvents: []\n #observables: []\n #relations: []\n #touches: []\n +timestamps: true\n +usesUniqueIds: false\n #hidden: []\n #visible: []\n #fillable: array:7 [\u25bc\n 0 => \"name\"\n 1 => \"title\"\n 2 => \"description\"\n 3 => \"content\"\n 4 => \"header\"\n 5 => \"in_menu\"\n 6 => \"position\"\n ]\n #guarded: array:1 [\u25bc\n 0 => \"*\"\n ]\n }\n]\n\nAny help would be highly appreciated."} +{"id": "000550", "text": "I've got a long-running Laravel app with a traditional form in a blade view. Inside of this form I want to start to include a couple Livewire components that can refresh and submit data independent of the form. Is this possible?\nFor testing purposes I've created this simple component and nested it within the existing
form on the page:\n
\n Time: {{ time() }}\n \n
\n\nBut, clicking the button doesn't refresh the component, and instead submits the form."} +{"id": "000551", "text": "I have a Laravel store project that I am confused to develop my products database. My store products are clothes, shoes, pants, laptops, phones, headphones, etc.\nHello friends, good time\nThis database is for the products of my store:\ndatabase photo\nBut as long as I have products like shoes and clothes it works correctly and optimally because they have many options, for example these are the options for product_id=3 which is a dress in the product_options table:\n\n\n\noption::1\noption::2\noption::3\n\n\n\n\ncolor_id:8\ncolor_id:8\ncolor_id:9\n\n\nsize_id: 3\nsize_id:5\nsize_id:5\n\n\nprice: 25\nprice: 20\nprice: 20\n\n\nstock: 5\nstock: 9\nstock: 8\n\n\n\nBut for products such as phones, laptops, etc., which have only one option and do not have color_id and size_id at all. I don't know what to do. please help. For example, these are the options for product_id=5, which is a laptop in the product_options table:\n\n\n\noption::1\n\n\n\n\ncolor_id:null\n\n\nsize_id: null\n\n\nprice: 50\n\n\nstock: 5"} +{"id": "000552", "text": "I was trying to print array inside a foreach loop in following way:\n@foreach($response['results'] as $row)\n \n \n @foreach($row['status'] as $r)\n Status- {{$r['groupName']}} // it shows above error on this line\n @endforeach\n \n {{$row['from'] - $row['sentAt']}}\n {{$row['to'] - $row['doneAt']}}\n \n@endforeach\n\nI have checked in every response and I have got status value on each response. Here is my array response:\n\"results\" => array:14 [\u25bc\n 0 => array:13 [\u25bc\n \"messageSegments\" => array:3 [\u25b6]\n \"sentAt\" => \"2024-02-23T04:41:01.881Z\"\n \"doneAt\" => \"2024-02-23T04:41:03.267Z\"\n \"mmsCount\" => 3\n \"mccMnc\" => \"310260\"\n \"price\" => array:2 [\u25b6]\n \"status\" => array:5 [\u25bc\n \"groupId\" => 3\n \"groupName\" => \"DELIVERED\"\n \"id\" => 5\n \"name\" => \"DELIVERED_TO_HANDSET\"\n \"description\" => \"Message delivered to handset\"\n ]\n \"error\" => array:5 [\u25bc\n \"groupId\" => 0\n \"groupName\" => \"OK\"\n \"id\" => 0\n \"name\" => \"NO_ERROR\"\n \"description\" => \"No Error\"\n ]\n \"applicationId\" => \"default\"\n ]\n 1 => array:13 [\u25b6]\n 2 => array:14 [\u25b6]\n 3 => array:14 [\u25b6]\n\nBut it shows following error :\n\nTrying to access array offset on value of type int\n\nWhat could be possible error behind that and how to solve it?"} +{"id": "000553", "text": "I have tried this in my migration but it does not work as expected.\n$table->string('key')->primary()->index();\nI also tried to use chatgpt but no solutions were correct. So I hit brick wall debugging, I know its something obvious I am missing."} +{"id": "000554", "text": "In table, I have a \"View Address\" button where when I click, it will show the address details. It works fine but however, since I am still new in Alpine.js, when I click the view address it shows all address details for all rows in the table. Below is my code:\n\n @foreach ($customer as $customers)\n \n \n {{ $customers->id }}\n {{ $customers->name }}\n {{ $customers->email }}\n \n
\n \n \n id) }}\"\n class=\"text-indigo-600 hover:text-indigo-900\">\n \n \n \n \n \n \n \n \n @if ($customers->addresses)\n @php\n $addresses = is_string($customers->addresses) ? json_decode($customers->addresses, true) : $customers->addresses;\n $addresses = isset($addresses['addresses']) ? $addresses['addresses'] : $addresses;\n $num = 1;\n @endphp\n @if (!empty($addresses) && is_array($addresses))\n @foreach ($addresses as $address)\n
\n Address {{ $num }}
\n\n @if (array_key_exists('street1', $address))\n Street 1: {{ $address['street1'] }}
\n @endif\n @if (array_key_exists('street2', $address))\n Street 2: {{ $address['street2'] }}
\n @endif\n @if (array_key_exists('postcode', $address))\n Postcode: {{ $address['postcode'] }}
\n @endif\n @if (array_key_exists('city', $address))\n City: {{ $address['city'] }}
\n @endif\n @if (array_key_exists('state', $address))\n State: {{ $address['state'] }}
\n @endif\n @if (array_key_exists('country', $address))\n Country: {{ $address['country'] }}
\n @endif\n
\n @if (!$loop->last)\n
\n @endif\n @php\n $num++;\n @endphp\n @endforeach\n @endif\n @endif\n \n \n @endforeach\n\n\nI believe that since I declare , it will show all the address details for all rows. I tried to use based on $customer->id but somehow it does not work."} +{"id": "000555", "text": "I am building a website in Laravel and need to use a different domain for each language.\nCurrently I'm using mcamara/laravel-localization for the localization.\nFor example:\n\nEN => my-english-website.com\nNL => my-dutch-website.com\n\nIt seems like using different domains is not something that is supported by default.\nHas someone run into this issue and knows how to solve this?"} +{"id": "000556", "text": "I recently installed Laravel Framework 10.43.0 and then installed Laravel Breeze into that, which created a bunch of new files, including resources/views/layouts/navigation.blade.php. In that file there's this:\n\n
\n routeIs('dashboard')\">\n {{ __('Dashboard') }}\n \n
\n\nI can add new nav links by adding new x-nav-link elements but my question is... what are the valid : attributes for x-nav-link? There's :href and :active... anything else?\nI ask because I'd like to make a nav link appear conditionally based on whether or not you're an admin for the site and I'm curious if there's a built in way to do that with the x-nav-link element."} +{"id": "000557", "text": "I've been building a mail template based on the published files you get from php artisan vendor:publish --tag=laravel-notifications\nAnd I've successfully edited the message layout to include a powered-by message, however this piece does not use a variable.\nNow I'm stuck trying to add a unique unsubscribe url to just above (see picture below) this powered-by but I'm unable to pass it \"upwards\" to the vendor template message.blade.php\nThis is part is from my toMail() function from the Notification which is ultimately called by a $user->notify(new Reminder(..)) function\nreturn (new MailMessage())\n ->markdown('emails.reminder', ['unsubscribe_url' => 'custom_url_here'])\n\nThis is the editted message.blade.php file\n\n\n{{-- Body --}}\n{{ $slot }}\n\n{{-- Subcopy --}}\n@isset($subcopy)\n\n \n {{ $subcopy }}\n \n\n@endisset\n\n{{-- Footer --}}\n\n\n \n Unsubscribe here.\n \n\n \n\n \n \u00a9 {{ date('Y') }} {{ config('app.name') }}. @lang('All rights reserved.')\n \n\n\n\nI've tried to use functions like extends, yield, include, ... inside the emails.reminder blade template\nAlso tried different ways of adressing the vendor template to no luck.\n@extends('vendor.mail.html.message',[\n 'unsubscribe_url' => $unsubscribe_url,\n])\n\n@yield('vendor.mail.html.message',[\n 'unsubscribe_url' => $unsubscribe_url,\n])\n\n@include('vendor.mail.html.message',[\n 'unsubscribe_url' => $unsubscribe_url,\n])\n\n@extends('x-mail::message',[\n 'unsubscribe_url' => $unsubscribe_url,\n])\n\n@extends('x-slot:message',[\n 'unsubscribe_url' => $unsubscribe_url,\n])\n\n@extends('x-mail::footer',[\n 'unsubscribe_url' => $unsubscribe_url,\n])\n\n@extends('x-slot:footer',[\n 'unsubscribe_url' => $unsubscribe_url,\n])\n\n@extends('mail::html.message',[\n 'unsubscribe_url' => $unsubscribe_url,\n])\n\n@extends('mail::text.message',[\n 'unsubscribe_url' => $unsubscribe_url,\n])\n\nThe picture as mentioned above:"} +{"id": "000558", "text": "I developed a web application with React frontend and Laravel for stateless API.\nI would like to access to the frontend with the url \"http://localhost/my-application\" and the backend's API with the url \"http://localhost/api/my-application/\".\nI created the folder \"www/api/my-application\" and here I put a symlink of public index.php of my Laravel backend.\nThe problem is that to call the APIs, I have to put this url \"http://localhost/api/my-application/api/login\".\nAs you can see I have to repeat \"/api\" because Laravel APIs have that default url. Also, if I try to just access to just \"http://localhost/api/my-application/\" I get a page with 500 Server Error, and the log says\n\nERROR: View [welcome] not found. {\"exception\":\"[object] (InvalidArgumentException(code: 0): View [welcome] not found.\n\nThis is because it tries to access to the view. But I don't have any views, I would like that my Laravel project has just API stuff\nHow to clean the project so that I just have to access to \"http://localhost/api/my-application/\" to access to the APIs? 
So the \"login\" API, should be here: \"http://localhost/api/my-application/login\" and not here \"http://localhost/api/my-application/api/login\""} +{"id": "000559", "text": "I have dispatched a job and the the job is added to the database on the specified table, e.g. \"Jobs. The jobs are not being processed, even though the jobs are added to the database. No log neither failed_job.\nI have tried to create a new simple job with only Logger suspecting that I have errors in the handle code, still not working."} +{"id": "000560", "text": "I'm Using Laravel 10 I'm trying to get Random Record Without Duplication using inRandomOrder But It Giving Duplicate Record Some Time.\nhelp me to Solve It\nMy Controller Code\n$Question = Question::inRandomOrder()->limit(50)->get();"} +{"id": "000561", "text": "I have this route defined:\nRoute::match(['get', 'post'], '/{class}/{function}/', [OldBackendController::class, 'apiV1']);\n\nIf I do this request:\nPOST /api/v2_1/wishlist/archive\n\nLaravel enters int the OldBackendController, and the value of $class variable (in the controller), is this:\napi/v2_1/wishlist\n\nWhy? It shouldn't enter in the controller, cause the request does not contains 2 variables, but 4.\nThe strange thing is if in the controller I print $request->segments() value, I get all 4 segment:\nArray\n(\n [0] => api\n [1] => v2_1\n [2] => wishlist\n [3] => archive\n)"} +{"id": "000562", "text": "I am attempting to integrate Laravel 11 with React.js for data retrieval and transmission between the two. However, I cannot locate the routes/api.php file in the latest version of Laravel.\nI have searched for others experiencing the same issue, but I have yet to find any similar cases since Laravel 11 was only released a week ago."} +{"id": "000563", "text": "This is Laravel 11 + Fortify + Sanctum. I'm using Laravel for my API backend. Front-end is a 1st-party SPA.\nI was just testing my login endpoint (POST) using Thunder Client (XHR). When login call succeeds once, any subsequent calls to login endpoint issue a redirect to root url instead of returning a JSON response telling that the user is already authenticated. This means the caller will get a 405 (Method Not Allowed) as final response since there is no POST endpoint on root url.\nIn the past, we were able to control this buggy behavior by modifying RedirectIfAuthenticated middleware and redirecting only if this was not an XHR. In Laravel 11, they have moved these middleware into the framework itself. I also hear that they have fixed the problem itself, so there is no need to edit the middleware anymore, but I'm still seeing the bug.\nI also tried the new helper method of Laravel 11 to override redirection behavior like this (in bootstrap/app.php):\n$middleware->redirectUsersTo(function() {\n if(request()->wantsJson()) {\n return response()->json(['result' => 'success'], 200);\n } else {\n return '/home';\n }\n});\n\nBut this results in other unrelated problems. Am I missing something here?\nUpdate\nI have found that adding the following to RedirectIfAuthenticated middleware's handle function fixes the problem:\nif ($request->expectsJson())\n return response()->json(['message' => 'authenticated.'], 200);\n\nThis problem and this fix appears to be well known for many years. However, the fact that this middleware is now part of the framework in Laravel 11 and lives in vendor directory means I have to add this line manually to deployment and also take care of it not getting overwritten during updates. 
Can't see why Laravel haven't added this simple check over the past several major versions."} +{"id": "000564", "text": "I am using the Cknow/Money package to handle money values in models.\nI have this model event that validates a value before saving to prevent wrong data from being entered due to a common scenario:\nI have this laravel 10 saving event:\n static::saving(function (self $order) {\n if ($order->refunded_amount->greaterThan($order->total_amount)) {\n // Can't refund more than the order total.\n $order->refunded_amount = $order->total_amount;\n dd($order->refunded_amount); // This is still the original refunded_amount value instead of the new total_amount value\n }\n });\n\nIf I check the value of $order->refunded_amount after it is set to $order->total_amount, the value is still the wrong refunded amount as if the assignment didn't work somehow.\nOn the model, these are both cast:\nprotected $casts = [\n 'total_amount' => MoneyIntegerCast::class.':currency',\n 'refunded_amount' => MoneyIntegerCast::class.':currency',\n];\n\nI have a mutator which sets the value on the model:\npublic function setRefundedAmountAttribute(Money $value): void\n{\n if ($this->currency instanceof Currency && $value->getCurrency()->getCode() !== $this->currency->getCode()) {\n throw new InvalidMoneyException(\n \"{$value->getCurrency()->getCode()} does not equal existing currency {$this->currency->getCode()}\"\n );\n }\n\n $this->attributes['currency'] = $value->getCurrency();\n $this->attributes['refunded_amount'] = $value->getAmount();\n}\n\nI've tried removing the mutator to see if that is the problem, but that does not help.\nI can assign a value outside of the saving event and no issues:\n$someMoney = Money::USD(10.00);\n$order = new Order();\n$order->refunded_amount = $someMoney;\n$order->save();\n\nThis only happens with objects within the Eloquent models. If I were to have something like refund_count and total_count and both are integers, then the result would be that refund_count would now equal total_count as expected.\nTo be clear, this isn't just happening with these money objects. It is also happening with dates that Laravel has cast to Carbon objects.\nprotected $casts = ['renewed_at' => 'datetime'];\n\nstatic::saving(function (self $order) {\n $order->renewed_at = now();\n dd($order->renewed_at); // The renewed_at value will still be what it was originally set to instead of now()\n});\n\nIf I refresh the model after saving, it will show the updated timestamp values from the saving event, but it won't show them if I add a dd() right after the value is set within that event. This implies there is some sort of caching within the model going on after the accessor is first called.\nThis isn't the case with the money objects. They don't change within the saving event and they don't change after a refresh.\nIs there some sort of model level caching introduced in Laravel after v8? I don't remember this being an issue in earlier versions but it is happening in v10."} +{"id": "000565", "text": "I'm attempting to implement basic authentication functionality for testing purposes. I'm testing via Postman, so before logging in, I make a GET request to /sanctum/csrf-cookie. After that, I hit the /login endpoint and am able to receive data (if this part $request->session()->regenerate(); is commented out). However, if I try to access a protected route, I receive an error. Despite following the documentation closely, I'm encountering an issue when a user attempts to sign in. 
I receive an error when $request->session()->regenerate(); this is not commented out:\n\"message\": \"Session store not set on request.\"\n\nHere's my bootstrap/app.php:\nreturn Application::configure(basePath: dirname(__DIR__))\n ->withRouting(\n using: function () {\n Route::middleware('api')\n ->prefix('api')\n ->group(function () {\n // routes/api.php is not included here\n require base_path('routes/Api/V1/Auth/routes.php');\n });\n },\n web: __DIR__ . '/../routes/web.php',\n commands: __DIR__ . '/../routes/console.php',\n health: '/up',\n )\n ->withMiddleware(function (Middleware $middleware) {\n $middleware->statefulApi();\n })\n ->withExceptions(function (Exceptions $exceptions) {\n //\n })->create();\n\nHere's my login:\npublic function login(object $request)\n {\n // Validate user credentials\n if (!Auth::attempt($request->only(['username', 'password']))) {\n return $this->failedRequest('', 'Invalid email address or password', 400);\n }\n\n // Regenerate the user's session to prevent session fixation\n $request->session()->regenerate();\n\n // Sign in user\n Auth::login(Auth::user());\n\n // Return data\n return $this->successfullRequest(Auth::user(), 'User successfully logged in', 200);\n }\n\nMy routes:\nRoute::group(['prefix' => 'v1/auth'], function () {\n Route::post('register', [AuthController::class, 'register']);\n Route::post('login', [AuthController::class, 'login']);\n Route::post('logout', [AuthController::class, 'logout'])->middleware('auth:sanctum');\n});\n\nIf I remove this part, I basically receive Unauthenticated message when I try to hit .../logout\n\n$request->session()->regenerate();\n\nsanctum.php -> stateful:\n'stateful' => explode(',', env('SANCTUM_STATEFUL_DOMAINS', sprintf(\n '%s%s%s',\n 'localhost,localhost:3000,127.0.0.1,127.0.0.1:8000,::1',\n env('APP_URL') ? ',' . parse_url(env('APP_URL'), PHP_URL_HOST) : '',\n env('FRONTEND_URL') ? ',' . parse_url(env('FRONTEND_URL'), PHP_URL_HOST) : ''))),\n\nmy env file:\nAPP_URL=http://localhost:5000\nFRONTEND_URL=http://localhost:3000"} +{"id": "000566", "text": "I want to add service provider in laravel 11, but i am not sure how to add it using laravel 11. As previous version of laravel, it is added in config/app.php file, but in laravel 11 it needs to be added in packageServiceProvider file within providers folder.\nBelow is my code, please tell me if i am wrong somewhere..\n Anand\\LaravelPaytmWallet\\PaytmWalletServiceProvider::class,\n ];\n\n /**\n * Register services.\n */\n public function register(): void\n {\n //\n }\n\n /**\n * Bootstrap services.\n */\n public function boot(): void\n {\n //\n }\n}"} +{"id": "000567", "text": "Before Laravel 11, I used to bind listeners to events inside the App\\Providers\\EventServiceProvider provider class, for example:\n>\n */\n protected $listen = [\n MyEvent::class => [\n MyListener::class\n ]\n ];\n}\n\nIn Laravel 11, this binding isn't necessary at all since Laravel auto-discovery feature auto-discovers the listeners from the app/Listeners directory. 
How can I instruct Laravel to auto-discover listeners from a different directory such as app/Domain/Listeners ?"} +{"id": "000568", "text": "I have a route that serves as a webhook endpoint that gets called by a remote service, but the calls that the service makes to the webhook always fail.\nAfter some inspection of the service logs, I learned that the service is getting an HTTP error code 419.\nI used to add exceptions inside the $except property of the App\\Http\\Middleware\\VerifyCsrfToken middleware, However, I'm on Laravel 11 and I can't find this middleware anymore. What is the solution to this problem?"} +{"id": "000569", "text": "I'm on Laravel 10, and new to it too. Im trying to submit a form to a check() function that does some basic checks in db. Then it redirects the user to another route with $_GET data in the URL and to prevent the \"POSTDATA warning because the refresh function wants to resend previous POST form data.\" warning on the check() function page.\nhere is the friendly url with the $_GET data {from?} and {search_string?}\nRoute::group(['middleware' => ['web']], function () {\n Route::post('/check', [SampleController::class, 'check'])->name('sample.check');\n Route::get('/order/{from?}/{search_string?}', [SampleController::class, 'order'])->name('sample.order');\n}\n\nthis is my check() function\n public function check(Request $request) : RedirectResponse\n {\n\n $search_string = strtoupper(trim($request->search_string));\n $from = strtolower(trim($request->data_from));\n ...\n ...\n\n //if success redirect\n return redirect()->to(route('sample.order'), ['from' => $from, 'search_string' => $search_string]);\n }\n\nand here is the order function\npublic function order(Request $request) \n {\n $validator = Validator::make($request->route()->parameters(),[\n 'from' => [\n 'required',\n 'string',\n 'in:uk,us,id'\n ],\n 'search_string' => [\n 'required',\n 'string',\n 'max:255'\n ]\n ]);\n\n if ($validator->fails())\n {\n //abort(404, $validator->errors()); //this shows the json error message, how to make it show\n return view('samples::order')->withErrors($validator->errors());\n }\n\n \n return view('samples::order', ['from' => $request->from, 'search_string' => $request->search_string]);\n }\n\nquestion is:\n\nis this the \"laravel way\" to validate the parameters in the url?\nis this how to to prevent the \"POSTDATA warning\" without using ajax?"} +{"id": "000570", "text": "i want to make the project data saved in history page after a project finished and also for the soft delete function in the same page. but it's only shows the data with status done\nclass HistoryController extends Controller\n{\n /**\n * Display a listing of the resource.\n */\n public function index()\n {\n $projects = Project::where('status', 'Done') //select data with done status only\n ->orWhereNotNull('deleted_at') // shows soft deleted data\n ->orderBy('id', 'desc')\n ->paginate(5);\n\n // Pass the merged projects to your view\n return view('history', compact('projects'));\n }\n}\n\ni tried to have 2 variable then merges it. it does work but it makes the pagination doesn't work. 
i also try this but it result the same\npublic function index()\n {\n $projects = Project::withTrashed()\n ->where('status', 'Done')\n ->orderBy('id', 'desc')\n ->paginate(5);\n\n // Pass the merged projects to your view\n return view('history', compact('projects'));\n }\n\nis it possible to do this?"} +{"id": "000571", "text": "I'm creating a policy for model todo to authorize the user role and then set custom access for model functions like create(), update(), etc. As written in Laravel Documentation, we can create a policy for the todo model with php artisan make:policy todoPolicy --model=todo.\nWhen we use modelname + Policy for our policy name that is located in App\\Policies folder, Laravel automatically register them for the model, as we used todoPolicy for todo Model.\nAlso, to check if this Policy works or not, I had set return true; for the create() function in the todoPolicy file, and I'm calling $this->authorize('create,' todo::class); in add function of component for testing it.\nBut it always returns 403, not an authorized page. What's the problem with my code?\nComponent / Controller\nnamespace App\\Livewire\\Elements\\Todolist;\n\nuse Livewire\\Component;\nuse App\\Models\\todo;\n\nclass Todolist extends Component\n{\n public $description;\n\n public function done($id) {\n sleep(0.5);\n todo::where('id',$id)->first()->update([\n 'is_done' => true,\n ]);\n }\n\n public function restore($id) {\n sleep(0.5);\n todo::where('id',$id)->first()->update([\n 'is_done' => false,\n ]);\n }\n\n public function delete($id) {\n sleep(0.5);\n todo::where('id',$id)->first()->delete();\n }\n\n public function add(todo $todo) {\n $this->authorize('create',$todo);\n \n $this->validate([\n 'description' => ['required','max:128'],\n ]);\n sleep(0.5);\n todo::create([\n 'user_id' => session('user_id'),\n 'description' => $this->description,\n ]);\n $this->reset();\n }\n\n public function render()\n {\n return view('livewire.elements.todolist.todolist',[\n 'todos' => todo::orderBy('is_done','ASC')->orderBy('created_at','DESC')->get(),\n ]);\n }\n}\n\nTodoPolicy\nnamespace App\\Policies;\n\nuse App\\Models\\User;\nuse App\\Models\\todo;\nuse Illuminate\\Auth\\Access\\Response;\n\nclass todoPolicy\n{\n /**\n * Determine whether the user can view any models.\n */\n public function viewAny(User $user): bool\n {\n\n }\n\n /**\n * Determine whether the user can view the model.\n */\n public function view(User $user, todo $todo): bool\n {\n\n }\n\n /**\n * Determine whether the user can create models.\n */\n public function create(User $user): bool\n {\n return true;\n }\n\n /**\n * Determine whether the user can update the model.\n */\n public function update(User $user, todo $todo): bool\n {\n //\n }\n\n /**\n * Determine whether the user can delete the model.\n */\n public function delete(User $user, todo $todo): bool\n {\n //\n }\n\n /**\n * Determine whether the user can restore the model.\n */\n public function restore(User $user, todo $todo): bool\n {\n //\n }\n\n /**\n * Determine whether the user can permanently delete the model.\n */\n public function forceDelete(User $user, todo $todo): bool\n {\n //\n }\n}\n\nI have tried registering policies manually, but it didn't change anything."} +{"id": "000572", "text": "I am encountering the following error in my new installation of Laravel 11 and Vue.js when I try to submit a form, in this case, the Register form.\n\n419 | PAGE EXPIRED\n\nI have checked this question and yes, I do have @csrf in my register form and\n\n\nSteps to reproduce\n\nI 
installed a fresh instance of Laravel 11:\n\ncomposer create-project laravel/laravel test\ncd test/\n\n\nI added the laravel/ui package\n\ncomposer require laravel/ui\n\n\nI added the Vue.js and Auth UI\n\nphp artisan ui vue --auth\n\nIt gave me a warning:\n\nThe [Controller.php] file already exists. Do you want to replace it? (yes/no) [yes]\n\nI typed yes then [Enter].\n\nI installed the npm packages and initialized vite\n\nnpm install && npm run dev\n\n\nIniatilized the dev server and tried to register\n\nphp artisan serve\n\nEdit\nThere are additional steps that were done on the same repo. We work with mongodb hence we added the MongoDB configurations as well.\n\nInstalled the laravel/mongodb package\n\ncomposer require mongodb/laravel-mongodb\n\n\nAdded the configurations in config/database.php as below:\n\n'mongodb' => [\n 'driver' => 'mongodb',\n 'host' => env('DB_HOST', '127.0.0.1'),\n 'port' => env('DB_PORT', 27017),\n 'database' => env('DB_DATABASE', 'assets'),\n 'username' => env('DB_USERNAME', ''),\n 'password' => env('DB_PASSWORD', ''),\n 'options' => [\n 'database' => env('DB_AUTHENTICATION_DATABASE', 'admin'),\n ],\n],\n\n\nUpdated .env file as shown below:\n\nDB_CONNECTION=mongodb\nDB_HOST=127.0.0.1\nDB_PORT=27017\nDB_DATABASE=testdb\nDB_USERNAME=\nDB_PASSWORD=\n\nI am hoping there's someone with a solution.\nNB I tried to install Laravel 10 and do the same processes and it works perfectly. What has changed in Laravel 11? Any help is appreciated.\nEdit\nBelow is the contents of my bootstrap/app.php.\nuse Illuminate\\Foundation\\Application;\nuse Illuminate\\Foundation\\Configuration\\Exceptions;\nuse Illuminate\\Foundation\\Configuration\\Middleware;\n\nreturn Application::configure(basePath: dirname(__DIR__))\n ->withRouting(\n web: __DIR__.'/../routes/web.php',\n commands: __DIR__.'/../routes/console.php',\n health: '/up',\n )\n ->withMiddleware(function (Middleware $middleware) {\n //\n })\n ->withExceptions(function (Exceptions $exceptions) {\n //\n })->create();\n\nand register.blade.php\n\n @csrf\n
\n // Rest of code"} +{"id": "000573", "text": "table bayarlanjas\n| tglbayar | bayarlanja |user_id|\n| ---------- | ---------- |-------|\n| 2024-01-05 | 10 | 1 |\n| 2024-02-06 | 20 | 1 |\n| 2024-03-07 | 30 | 1 |\n| 2024-04-08 | 40 | 1 |\n\ntable zakats\n| tglawalhaul| tglakhirhaul |user_id|\n| ---------- | ------------ |-------|\n| 2024-01-01 | 2024-02-25 | 1 |\n| 2024-03-01 | 2024-04-25 | 1 |\n\n\nwhich I tried\nSELECT\n zakats.*,\n bayarlanjas.user_id,\n sum(bayarlanja) AS total_bayar\nFROM\n `zakats`\n LEFT JOIN `bayarlanjas` ON `bayarlanjas`.`user_id` = `zakats`.`user_id`\nWHERE\n `bayarlanjas`.`tglbayar` BETWEEN zakats.tglawalhaul\n AND zakats.tglakhirhaul\n AND `zakats`.`user_id` = 1\nGROUP BY\n `bayarlanjas`.`user_id`\n\n\nBut Error:\n\nSQLSTATE[42000]: Syntax error or access violation: 1055 Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated\n\nWhat I want\n\n| tglawalhaul| tglakhirhaul |total_bayar|user_id|\n| ---------- | ------------ |-----------|-------|\n| 2024-01-01 | 2024-02-25 | 30 | 1 |\n| 2024-03-01 | 2024-04-25 | 70 | 1 |"} +{"id": "000574", "text": "Laravel 11 does not come with a middleware file and the kernel.php file has been removed altogther. So, when I create a custom middleware, how do I register it?\nI do not know where to register middleware. Laravel 11 has been very confusing."} +{"id": "000575", "text": "i have just started working on laravel.I installed the laravel\nbreeze package stack blade for authentication .After updating the .env file and migrating the tables in to database .When i try to run the application it says get method not supported not only for register but also for login .i have tried the methods like php artisan route:clear , php artisan config:cache,php artisan optimize:clear.I even changed the post method in form and store function of register to get but it the error remains get method not supported.\nroute: auth.php\n group(function () {\n Route::get('register', [RegisteredUserController::class, 'create'])\n ->name('register');\n \n Route::post('register', [RegisteredUserController::class, 'store']);\n \n Route::get('login', [AuthenticatedSessionController::class, 'create'])\n ->name('login');\n \n Route::post('login', [AuthenticatedSessionController::class, 'store']);\n \n Route::get('forgot-password', [PasswordResetLinkController::class, 'create'])\n ->name('password.request');\n \n Route::post('forgot-password', [PasswordResetLinkController::class, 'store'])\n ->name('password.email');\n \n Route::get('reset-password/{token}', [NewPasswordController::class, 'create'])\n ->name('password.reset');\n \n Route::post('reset-password', [NewPasswordController::class, 'store'])\n ->name('password.store');\n });\n \n Route::middleware('auth')->group(function () {\n Route::get('verify-email', EmailVerificationPromptController::class)\n ->name('verification.notice');\n \n Route::get('verify-email/{id}/{hash}', VerifyEmailController::class)\n ->middleware(['signed', 'throttle:6,1'])\n ->name('verification.verify');\n \n Route::post('email/verification-notification', [EmailVerificationNotificationController::class, 'store'])\n ->middleware('throttle:6,1')\n ->name('verification.send');\n \n Route::get('confirm-password', [ConfirmablePasswordController::class, 'show'])\n ->name('password.confirm');\n \n Route::post('confirm-password', [ConfirmablePasswordController::class, 'store']);\n \n Route::put('password', [PasswordController::class, 'update'])->name('password.update');\n \n Route::post('logout', 
[AuthenticatedSessionController::class, 'destroy'])\n ->name('logout');\n });\n\nregister.blade.php\n\n \n @csrf\n\n \n
\n \n \n get('name')\" class=\"mt-2\" />\n
\n\n \n
\n \n \n get('email')\" class=\"mt-2\" />\n
\n\n \n
\n \n\n \n\n get('password')\" class=\"mt-2\" />\n
\n\n \n
\n \n\n \n\n get('password_confirmation')\" class=\"mt-2\" />\n
\n\n
\n \n {{ __('Already registered?') }}\n \n\n \n {{ __('Register') }}\n \n
\n \n
\n\nRegistered Controller\nvalidate([\n 'name' => ['required', 'string', 'max:255'],\n 'email' => ['required', 'string', 'lowercase', 'email', 'max:255', 'unique:'.User::class],\n 'password' => ['required', 'confirmed', Rules\\Password::defaults()],\n ]);\n\n $user = User::create([\n 'name' => $request->name,\n 'email' => $request->email,\n 'password' => Hash::make($request->password),\n ]);\n\n event(new Registered($user));\n\n Auth::login($user);\n\n return redirect(route('dashboard', absolute: false));\n }\n}\n\nError on browser\nSymfony\n\u2009\\\u2009\nComponent\n\u2009\\\u2009\nHttpKernel\n\u2009\\\u2009\nException\n\u2009\\\u2009\nMethodNotAllowedHttpException\nPHP 8.2.12\n11.3.1\nThe GET method is not supported for route register. Supported methods: POST.\n\nurl: http://localhost:8000/register"} +{"id": "000576", "text": "The upgrade notes for Laravel 11 say:\n\nThe float column type now creates a FLOAT equivalent column without total digits and places (digits after decimal point), but with an optional $precision specification to determine storage size as a 4-byte single-precision column or an 8-byte double-precision column. Therefore, you may remove the arguments for $total and $places and specify the optional $precision to your desired value and according to your database's documentation:\n$table->float('amount', precision: 53);\n\n\nHowever, the database documentation doesn't provide any explanation for what the precision argument might represent or why it defaults to 53. What effect will changing the value have on the resulting column?"} +{"id": "000577", "text": "My program is like this =\n$listreq2 = OutboundRequest::with('outboundmaterialrequest')\n->whereIn('id', $outboundIds)\n->whereHas('outboundmaterialrequest', function ($query) use ($ids) {\n $query->whereIn('id', $ids);\n})->get();\n\nWhy is my data filter on ->whereHas('outbound material request', function ($query) use ($ids) { doesn't work the contents of $ids for example are = 2,3\nbut all of the outboundmaterialrequest data appears"} +{"id": "000578", "text": "image\nThis is the admin side, when on the home page the theme of the web is white and it is fine there, I'm guessing it's because it is assigned to multiple css tag influencing the same part but I dont know here the files are or what even the name of the file is, how do I fix it\nThe only part I know that is attached to is a tag , it's empty like this but it adds the dropdown menu"} +{"id": "000579", "text": "I'm trying to block certain IP addresses from accessing my website using laravel 11.3.0, all the solutions i.m coming across suggest the method were you register the middleware in kernel.php but the version of laravel i'm using does not have kernel.php. 
So how do i go about this?\ni have tried registering it in app.php\nwithRouting(\n web: __DIR__.'/../routes/web.php',\n commands: __DIR__.'/../routes/console.php',\n health: '/up',\n )\n ->withMiddleware(function (Middleware $middleware) {\n $middleware->web(append: [\n \\App\\Http\\Middleware\\HandleInertiaRequests::class,\n \\Illuminate\\Http\\Middleware\\AddLinkHeadersForPreloadedAssets::class,\n ]);\n $middleware->alias([\n 'role' => \\Spatie\\Permission\\Middleware\\RoleMiddleware::class,\n 'permission' => \\Spatie\\Permission\\Middleware\\PermissionMiddleware::class,\n 'role_or_permission' => \\Spatie\\Permission\\Middleware\\RoleOrPermissionMiddleware::class,\n ]);\n\n \n \n return $middleware;\n })\n\n ->withMiddleware([\n \\App\\Http\\Middleware\\BlockIpMiddleware::class,\n ])\n ->withExceptions(function (Exceptions $exceptions) {\n //\n })\n ->create();\n\n\nERROR\nPHP Fatal error: Uncaught TypeError: Illuminate\\Foundation\\Configuration\\ApplicationBuilder::withMiddleware(): Argument #1 ($callback) must be of type ?callable, array given, called in C:\\xampp\\htdocs\\swamsite2\\Swarmsite\\Daniel\\telegram\\bootstrap\\app.php on line 29 and defined in C:\\xampp\\htdocs\\swamsite2\\Swarmsite\\Daniel\\telegram\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Configuration\\ApplicationBuilder.php:227\nStack trace:\n#0 C:\\xampp\\htdocs\\swamsite2\\Swarmsite\\Daniel\\telegram\\bootstrap\\app.php(29): Illuminate\\Foundation\\Configuration\\ApplicationBuilder->withMiddleware(Array)\n#1 C:\\xampp\\htdocs\\swamsite2\\Swarmsite\\Daniel\\telegram\\artisan(12): require_once('C:\\xampp\\htdocs...')\n#2 {main}\nthrown in C:\\xampp\\htdocs\\swamsite2\\Swarmsite\\Daniel\\telegram\\vendor\\laravel\\framework\\src\\Illuminate\\Foundation\\Configuration\\ApplicationBuilder.php on line 227"} +{"id": "000580", "text": "I am looking to implement a search function, that optionally includes the various fields that I have for my users to search on.\nMy code thusfar:\n $categories = $request->input('category'); // array(int)\n $textSearch = $request->input('text'); // string\n\n $query = Product::query();\n\n if(!is_null($categories)){\n //$query->selectSub();\n foreach($categories as $catId){\n $category = ShopCategory::find($catId);\n\n $query = $query->whereBelongsTo($category);//TODO\n }\n }\n if(!is_null($textSearch)){\n $query = $query->whereAny([\n 'name',\n 'description',\n 'shortDescription',\n ],\n 'like', '%'.$textSearch.'%');\n }\n\n $searchResults = $query->paginate(36);\n\nMy goal is to reach an SQL statement equivalent to:\nSELECT * FROM products WHERE (shop_category_id = ? OR shop_category_id = ?) AND (/* text search LIKE */)\nBut thusfar, I end up with more of a :\nSELECT * FROM products WHERE shop_category_id = ? AND shop_category_id = ? 
AND (/* text search LIKE */)\nWhere each of the category id's are ANDed as part of the overall query instead of ORed in a group.\nI am aware of or*() query methods, but it is unclear how to properly group them for the desired effect instead of just ORing them along with the text search.\nEdit: Based on the accepted answer, I landed on this:\n $mainQuery->where(function($query) use ($categories) {\n foreach ($categories as $catId) {\n $category = ShopCategory::find($catId);\n $query->orWhereBelongsTo($category);\n }\n });"} +{"id": "000581", "text": "I have the following routes in my web.php file\nRoute::resource('/tenants', TenantControler::class);\n\nThe command php artisan route:list shows all the resource routes with model name\n\ntenants/{tenant}\n\nI want a prefix, so I created a route group\nRoute::name(\"tenants.\")->prefix(\"tenants\")->group(function () {\n Route::resource('/', TenantControler::class);\n});\n\nnow if I try php artisan route:list the route list does not show model name in\nroutes but empty braces {}\n\ntenants/{}\n\nwhy its missing the model in the route?"} +{"id": "000582", "text": "When I tried to access the ID of a model (JenisSurvey) within the show method of the JSController, the ID is always null, even though the model is correctly retrieved. Interestingly, I have other controllers in my application where similar operations work perfectly fine. For example, if I use the User model in a different controller, I can access the ID without any issues.\nI've already checked the route configuration, model binding, auto-loaded files, controller logic, and the data with the searched ID exists in the database. Everything seems to be in order, and the logic mirrors that of other controllers where similar operations work.\nHere's a simplified version of the show method in the JSController:\npublic function show(JenisSurvey $jenisSurvey)\n{\n dd($jenisSurvey->id); // This always returns null\n\n if (!$jenisSurvey) {\n return response()->json(['error' => 'Data not found.'], 404);\n }\n return new JSResource($jenisSurvey);\n}\n\n\nInterestingly, I have other controllers in my application where similar operations work perfectly fine. For example, if I use the User model in a different controller (UserController), I can access the ID without any issues. 
Here's a simplified version of the show method in the UserController for comparison:\npublic function show(User $user)\n{\n dd($user->id); // This returns the correct ID\n return new UserResource($user);\n}\n\nAdditionally, when I switch the model and resource in the UserController to JenisSurvey, it works as expected.\n// UserController but with jenis survey model and resource\npublic function show(JenisSurvey $user)\n{\n dd($user->id); // This returns the correct ID\n return new JSResource($user);\n}\n\nHere are the routes:\nRoute::group(['namespace' => 'App\\Http\\Controllers', 'middleware' => 'auth:sanctum'],function () {\n Route::apiResource('users', UserController::class)->middleware(['auth','verified','role:admin']);\n Route::apiResource('js', JSController::class)->middleware(['auth','verified','role:admin|nasabah']);\n});\n\nHere are the screenshots below.\nJSController(Jenis Survey Controller):\n\nUserController using user's model and resource:\n\nUserController using JenisSurvey's model and resource:\n\nJSController (var dumping the whole model):\nphoto\nUserController (with Jenis Survey model and resource):\nphoto\nGiven that the issue seems to be specific to the JSController (Jenis Survey Controller) file, I suspect there might be something unique about its setup or environment. Any insights or suggestions on how to troubleshoot this issue further would be greatly appreciated."} +{"id": "000583", "text": "I'm having trouble with pagination in Laravel 10. I have generated 13 data from my database and I'm on page 1, so I'm expecting to see something like\n\n\"Showing 1 to 8 of 13 results\"\n\nbut instead, my code is displaying the message\n\n\"Showing 1 to 13 of 13 results\"\n\nwhat am I doing wrong? any help is appreciated.\nthis is my code:\n public function index(Request $request)\n {\n $client = new \\GuzzleHttp\\Client();\n\n $perPage = $request->query('per_page') ? : 8;\n $page = $request->query('page') ? : 1;\n\n $url = \"http://localhost/api/culinary?page=$page&per_page=$perPage\";\n\n $response = $client->request('GET', $url);\n\n $json = json_decode((string)$response->getBody(), true);\n\n $paginatedResult = new LengthAwarePaginator(\n $json, // the data to be paginated\n count($json), // total count of the data\n $perPage, // number of data shown per page\n $page, // current page number\n );\n\n $view_data = [\n 'title' => 'Browse Culinary',\n 'data' => $paginatedResult,\n ];\n return view('culinary.index', $view_data);\n }\n\nand I'm using this code in the index.blade.php to show the pagination part\n{{ $data->withQueryString()->withPath('/culinary')->links() }}"} +{"id": "000584", "text": "i use Laravel11 sanctum session (not token)\nLogin and logout (endpoints) are working correctly when called from postman\nbut when I call other api end points it gives me 401 unauthorized\nCould someone help me out please?aaa"} +{"id": "000585", "text": "I have watched several videos and read some related tutorials about customization of the register form of Laravel/JetStream registration system. All I found is something like this:\n//resources/views/auth/register.blade.php\n\n
\n \n \n \n \n \n \n \n \n
\n\nHowever I want to populate the genders dropdown from a database table, say \"genders\".\nFor this I have created a model Genders.\nAll I need to do is adding the following line\n//this line will be in a file but which one?\n$genders = Genders::all();\n\nThen I will be able to list the items in the view, as follows:\n
\n \n \n \n \n @foreach ($genders as $gender)\n \n @endforeach\n \n \n
\n\nI have added the following code into app/Providers/FortifyServiceProvider.php\n $genders]);\n });\n }\n\nI could not find where should I put the code $genders = Genders::all(); and pass $genders to view register.blade.php? This could simple for a few options but what if I want to list countries for example?"} +{"id": "000586", "text": "After run composer create-project laravel/laravel:^11.0 project-name then going to my project folder cd project-name after that i should run php artisan serve but i got an error at this step that seems vendor folder not found in my project\nPlease note: If i using laravel 8 it's work fine but i need to use laravel 11\nHelp me, Thank!\nMy php version is 8.2.10\nTry to using laravel 11 but can't, On the other hand laravel 8 works find.\nPLease note that I'am try to run composer install and composer update and remove composer cache\nFull error message: \"PHP Warning: require(/home/ahmedelmoslmany/Laravel-demo/example-app11/vendor/autoload.php): Failed to open stream: No such file or directory in /home/ahmedelmoslmany/Laravel-demo/example-app11/artisan on line 9\nPHP Fatal error: Uncaught Error: Failed opening required '/home/ahmedelmoslmany/Laravel-demo/example-app11/vendor/autoload.php' (include_path='.:/usr/share/php') in /home/ahmedelmoslmany/Laravel-demo/example-app11/artisan:9\nStack trace:\n#0 {main}\nthrown in /home/ahmedelmoslmany/Laravel-demo/example-app11/artisan on line 9\""} +{"id": "000587", "text": "I'm trying to work with Laravel 11, and deleted all migrations and created my own migrations, even deleted the database from mysql (which is a waste, I could use the old data with the new app in Laravel 11), but is not working at at all. It gives the error:\nSQLSTATE[42S02]: Base table or view not found: 1146 Table 'mydatabase.sessions' doesn't exist\n\nWhy that table still exists and its being called from code, even if I didn't create?"} +{"id": "000588", "text": "I have a User model and a Blocklist model, Now when I call User::all() I only want to return Users who are not related based on records in the block list table.\nFor instance:\nuser Table\n\n\n\nid\n...\n\n\n\n\n1\n...\n\n\n2\n...\n\n\n3\n...\n\n\n4\n...\n\n\n5\n...\n\n\n\nblocklist Table\n\n\n\nid\nuser_id\nblockable_id\nblockable_type\n\n\n\n\n1\n1\n2\n\\App\\Models\\User\n\n\n2\n1\n3\n\\App\\Models\\User\n\n\n2\n5\n1\n\\App\\Models\\User\n\n\n\nUser Model\n...\n\npublic function blocklist()\n{\n return $this->hasMany(Block::class);\n}\n\n\nBlocklist Model\n...\n\npublic function blockable(): MorphTo\n{\n return $this->morphTo();\n}\n\n\nUserController\n...\n$users = \\App\\Models\\User::whereDoesntHave('blocklist', function (Builder $query) {\n $query->where('user_id', auth('sanctum')->id());\n $query->orWhere(function (Builder $query) {\n $query->where('blockable_id', auth('sanctum')->id());\n $query->where('blockable_type', User::class);\n });\n})->all();\n\nThe idea is that, if user: 1 is making this request they should not see user: 2 and user: 3 whom they have blocked and they should also not see user: 5 who has also blocked them, but this is not how it works, whatever I do all the users are still returned."} +{"id": "000589", "text": "I was working on adding localization to my Laravel 11 project, and I created a middleware called SetLocale that consists of the codebase similar to below:\npublic function handle(Request $request, Closure $next): Response\n {\n App::setLocale(session()->get('locale'));\n return $next($request);\n }\n\nI added it to my bootstrap/app.php file like 
this:\nreturn Application::configure(basePath: dirname(__DIR__))\n ->withRouting(\n web: __DIR__ . '/../routes/web.php',\n commands: __DIR__ . '/../routes/console.php',\n health: '/up',\n )\n ->withMiddleware(function (Middleware $middleware) {\n $middleware->append(SetLocale::class);\n })\n ->withExceptions(function (Exceptions $exceptions) {\n //\n })->create();\n\nIn my LocalizationController, I set the session like this:\npublic function setLocalization(string $locale): RedirectResponse\n{\n session()->put('locale', $locale);\n App::setLocale($locale);\n return back();\n}\n\nMy route in web.php looks like this:\nRoute::get('/locale/{locale}', [LocalizationController::class, 'setLocalization'])->name('locale.switch');\n\nHere's what I tried:\n\nUsed the Session facade throughout the codebase, but it didn't work.\nUsed back()->with('locale', $locale); when returning in the setLocalization() function in the LocalizationController, but it didn't work.\nTried various changes, but I couldn't retrieve the 'locale' session data in my middleware.\n\nThe only way I got it to work was by wrapping the middleware around the route like this:\nRoute::prefix('localization')->middleware([SetLocale::class])->group(function() {\n Route::get('/locale/{locale}', [LocalizationController::class, 'setLocalization'])->name('locale.switch');\n});\n\nIs my use of global middleware incorrect, or did Laravel change how it handles sessions for global middleware?\nJust an FYI, Laravel has now moved its middleware to elsewhere. Now it's a clean file located in the bootstrap/app.php file.\nThanks for your help."} +{"id": "000590", "text": "I have tried to search for tutorial to move the public folder, but from all the guide it seems like the code is different than version 11. The folder structure I want to move will be like:\n\npublic (public folder is here)\nprogram (all the other files to be stored inside this folder)\n\nI have modified the public/index.php file to be:\nhandleRequest(Request::capture());\n\nHowever, when I try to run php artisan serve, I get the error\n Symfony\\Component\\Process\\Exception\\RuntimeException\n\n The provided cwd \"C:\\wamp64\\www\\my-project\\program\\public\" does not exist.\n\nWhat are the things needed to modify to get it works?"} +{"id": "000591", "text": "I am working on this beginner Laravel tutorial. In this example, why is the variable $incomingFields required?\nclass userController extends Controller\n{\npublic function register(Request $Request) {\n $incomingFields = $Request->validate({\n 'name' => 'required',\n 'email' ='required',\n 'password' => 'required'\n ]);\n return 'Hello from our controller';\n }\n}\n\nLink to tutorial\nI removed the variable and it works."} +{"id": "000592", "text": "I am following the Laravel Bootcamp tutorial, but I am stuck at notifications & events, as the notification is not sent.\nI have kept the default configuration in .env while, which is as follows:\nMAIL_MAILER=log\nMAIL_HOST=localhost\nMAIL_PORT=2525\nMAIL_USERNAME=null\nMAIL_PASSWORD=null\nMAIL_ENCRYPTION=null\nMAIL_FROM_ADDRESS=\"hello@example.com\"\nMAIL_FROM_NAME=\"${APP_NAME}\"\n\nIf I understand correctly, this should just write the e-mail at storage/logs/laravel.log.\nHowever, if I start the queue processing with php artisan queue:work, and I send a new chirp, I see the event getting triggered, but it fails after a minute and in the log I see\n[2024-05-12 10:01:39] local.ERROR: App\\Listeners\\SendChirpCreatedNotifications has been attempted too many times. 
{\"exception\":\"[object] (Illuminate\\\\Queue\\\\MaxAttemptsExceededException(code: 0): App\\\\Listeners\\\\SendChirpCreatedNotifications has been attempted too many times. at /root/chirper/vendor/laravel/framework/src/Illuminate/Queue/MaxAttemptsExceededException.php:24)\n[stacktrace]\n#0 /root/chirper/vendor/laravel/framework/src/Illuminate/Queue/Worker.php(785): Illuminate\\\\Queue\\\\MaxAttemptsExceededException::forJob()\n#1 /root/chirper/vendor/laravel/framework/src/Illuminate/Queue/Worker.php(519): Illuminate\\\\Queue\\\\Worker->maxAttemptsExceededException()\n#2 /root/chirper/vendor/laravel/framework/src/Illuminate/Queue/Worker.php(428): Illuminate\\\\Queue\\\\Worker->markJobAsFailedIfAlreadyExceedsMaxAttempts()\n#3 /root/chirper/vendor/laravel/framework/src/Illuminate/Queue/Worker.php(389): Illuminate\\\\Queue\\\\Worker->process()\n#4 /root/chirper/vendor/laravel/framework/src/Illuminate/Queue/Worker.php(176): Illuminate\\\\Queue\\\\Worker->runJob()\n#5 /root/chirper/vendor/laravel/framework/src/Illuminate/Queue/Console/WorkCommand.php(139): Illuminate\\\\Queue\\\\Worker->daemon()\n#6 /root/chirper/vendor/laravel/framework/src/Illuminate/Queue/Console/WorkCommand.php(122): Illuminate\\\\Queue\\\\Console\\\\WorkCommand->runWorker()\n#7 /root/chirper/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(36): Illuminate\\\\Queue\\\\Console\\\\WorkCommand->handle()\n#8 /root/chirper/vendor/laravel/framework/src/Illuminate/Container/Util.php(41): Illuminate\\\\Container\\\\BoundMethod::Illuminate\\\\Container\\\\{closure}()\n#9 /root/chirper/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(93): Illuminate\\\\Container\\\\Util::unwrapIfClosure()\n#10 /root/chirper/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(35): Illuminate\\\\Container\\\\BoundMethod::callBoundMethod()\n#11 /root/chirper/vendor/laravel/framework/src/Illuminate/Container/Container.php(662): Illuminate\\\\Container\\\\BoundMethod::call()\n#12 /root/chirper/vendor/laravel/framework/src/Illuminate/Console/Command.php(212): Illuminate\\\\Container\\\\Container->call()\n#13 /root/chirper/vendor/symfony/console/Command/Command.php(279): Illuminate\\\\Console\\\\Command->execute()\n#14 /root/chirper/vendor/laravel/framework/src/Illuminate/Console/Command.php(181): Symfony\\\\Component\\\\Console\\\\Command\\\\Command->run()\n#15 /root/chirper/vendor/symfony/console/Application.php(1049): Illuminate\\\\Console\\\\Command->run()\n#16 /root/chirper/vendor/symfony/console/Application.php(318): Symfony\\\\Component\\\\Console\\\\Application->doRunCommand()\n#17 /root/chirper/vendor/symfony/console/Application.php(169): Symfony\\\\Component\\\\Console\\\\Application->doRun()\n#18 /root/chirper/vendor/laravel/framework/src/Illuminate/Foundation/Console/Kernel.php(196): Symfony\\\\Component\\\\Console\\\\Application->run()\n#19 /root/chirper/vendor/laravel/framework/src/Illuminate/Foundation/Application.php(1187): Illuminate\\\\Foundation\\\\Console\\\\Kernel->handle()\n#20 /root/chirper/artisan(13): Illuminate\\\\Foundation\\\\Application->handleCommand()\n#21 {main}\n\"}\n\nThis is what I see in the console:\n$ php artisan queue:work\n\n INFO Processing jobs from the [default] queue.\n\n 2024-05-12 14:33:07 App\\Listeners\\SendChirpCreatedNotifications ................................................................................. 
RUNNING\n 2024-05-12 14:34:10 App\\Listeners\\SendChirpCreatedNotifications .............................................................................. 1m 3s FAIL\nKilled\n\nI also tried other options, like mailpit or mailtrap, but still couldn't get it work; also, sometimes no error message would be logged in the laravel.log file.\nThe code was run from WSL using PHP 8.3"} +{"id": "000593", "text": "I created a new laravel 11 app running on php 8.3.7 and i want to exclude some paths from csrf validation\nafter carefully reading the documentation https://laravel.com/docs/11.x/middleware#registering-middleware\ni edited my app.php inside the bootstrap folder like this\nwithRouting(\n web: __DIR__ . '/../routes/web.php',\n commands: __DIR__ . '/../routes/console.php',\n health: '/up',\n )\n ->withMiddleware(function (Middleware $middleware) {\n $middleware->append([\n App\\Http\\Middleware\\VerifyInstallation::class,\n Illuminate\\Foundation\\Http\\Middleware\\CheckForMaintenanceMode::class,\n App\\Http\\Middleware\\TrimStrings::class,\n Illuminate\\Foundation\\Http\\Middleware\\ConvertEmptyStringsToNull::class,\n App\\Http\\Middleware\\TrustProxies::class\n ]);\n\n $middleware->web(append: [\n App\\Http\\Middleware\\SelectLanguage::class,\n App\\Http\\Middleware\\CorsMiddleware::class,\n App\\Http\\Middleware\\GameCdnMiddleware::class\n ]);\n\n $middleware->web(replace: [\n Illuminate\\Foundation\\Http\\Middleware\\VerifyCsrfToken::class =>App\\Http\\Middleware\\CustomVerifyCsrfToken::class,\n ]);\n\n $middleware->api(append: [\n App\\Http\\Middleware\\UseApiGuard::class,\n 'throttle:60,1',\n 'bindings'\n ]);\n\n\n })\n ->withExceptions(function (Exceptions $exceptions) {\n //\n })->create();\n\nand my custom middleware\nname('blog.')->group(function () {\n Route::resource('', BlogController::class);\n});\n\nThat's how i called the route. Everything was working well. The index, create, store method. But as you can see there is parameter passing inside the store route. and giving me 404 error if I call the route also. Please help if anybody knows about the problem.\nI have tried to invoke the show method from controller via resource route like Read More. but rising 404"} +{"id": "000595", "text": "PHP: 8.2\nLaravel Framework: 11\nMySQL\nWe have roughly the following code on our website to represent the overall problem.\nPost.php\nclass Post extends BaseModel\n{\n public function blocks(): MorphMany\n {\n return $this->morphMany(Block::class, 'blockable');\n }\n}\n\nBlock.php\nclass Block extends Model\n{\n public function blockable(): MorphTo\n {\n return $this->morphTo();\n }\n\n public function alt(): string\n {\n $alt = '';\n\n if ($this->caption) {\n $alt = $this->caption;\n } else if ($this->blockable->title) {\n $alt = 'Block from ' . $this->blockable->title;\n }\n\n return $alt;\n }\n}\n\nPostController.php\n$posts = Post::where('status_id', 1)\n ->with('blocks')\n ->get();\n\nThis code works correctly and returns the expected result. But by calling $this->blockable within the alt function, it produces extra (duplicate) MySQL queries to find the parent, even though it is already queried and present. Is there a way to let Laravel know that blockable has already been queried as the parent?\nOne quick solution we found was the following, but we feel there is a more eloquent way of achieving this.\npublic function alt(Post $blockable): string\n{\n $alt = '';\n\n if ($this->caption) {\n $alt = $this->caption;\n } else if ($blockable->title) {\n $alt = 'Block from ' . 
$blockable->title;\n }\n\n return $alt;\n}"} +{"id": "000596", "text": "We are recently migrating old data to our new database. We are first exporting all the tables from the old database and importing it to our local database. Then after customizing some fields we are re-exporting them and importing the tables and contents to our new database.\nHowever, after importing even though there are all the tables, (jobs, failed_jobs, job_batches) present in the database, laravel throws error when running jobs. It says, Base table or view 'new_db.jobs\" doesn't exist.\nWe cannot find any solution for this."} +{"id": "000597", "text": "I have a api project with Laravel 11, and i want to make a middleware for admin api.\nAdminMiddleware.php (custom middleware)\njson([\n 'status' => 'forbidden',\n 'message' => 'You need to log in first.'\n ], 403);\n }\n\n if ($user->authority !== 'admin') {\n return response()->json([\n 'status' => 'forbidden',\n 'message' => 'You are not an administrator.'\n ], 403);\n }\n\n return $next($request);\n\n }\n}\n\nbootstrap/app.php (new dir middleware config)\nwithRouting(\n web: __DIR__.'/../routes/web.php',\n api: __DIR__.'/../routes/api.php',\n commands: __DIR__.'/../routes/console.php',\n health: '/up',\n )\n ->withMiddleware(function (Middleware $middleware) {\n $middleware->alias([\n \"is_admin\" => AdminMiddleware::class,\n ]);\n })\n ->withExceptions(function (Exceptions $exceptions) {\n //\n })->create();\n\n\napi.php (api route)\nuser();\n})->middleware('auth:sanctum');\n\n\nRoute::prefix('/v1')->group(function () {\n Route::prefix('/admin')->group(function () {\n Route::prefix('/product')->group(function () {\n Route::get('/', [ProductController::class, \"index\"]);\n Route::post('/', [ProductController::class, \"store\"]);\n Route::get('/{id}', [ProductController::class, \"show\"]);\n Route::put('/{id}', [ProductController::class, \"update\"]);\n Route::delete('/{id}', [ProductController::class, \"destroy\"]);\n });\n Route::prefix('/user')->group(function () {\n Route::get('/', [UserController::class, \"index\"]);\n Route::get('/{id}', [UserController::class, \"show\"]);\n Route::post('/{id}', [UserController::class, \"store\"]);\n Route::put('/{id}', [UserController::class, \"update\"]);\n Route::delete('/{id}', [UserController::class, \"destroy\"]);\n });\n })->middleware(\"is_admin\");\n\n Route::prefix('/auth')->group(function () {\n Route::post('/signup', [UserController::class, \"signup\"]);\n Route::post('/signin', [UserController::class, \"signin\"]);\n Route::post('/signout', [UserController::class, \"signout\"]);\n });\n Route::prefix('/product')->group(function () {\n Route::post('/', [ProductController::class, \"product\"]);\n });\n});\n\nI've just try using aliases and api method in config. but in api method case, all api route affected by the middleware. I'd try to use Route::group(['middleware' => 'is_admin'] and directly in api route by use the AdminMiddleware, and call in the method with middleware(AdminMiddleware::class). 
ya it's doesn't work."} +{"id": "000598", "text": "I am currently trying to learn Eloquent ORM but first I am trying to pass some static data to my Controller.\nHere is the snippet of my Controller\nSCControler.php\n 'alexis',\n ]);\n }\n}\n\nin the code above, as you can see I am now trying to fetch the data through my model SCModel:all(); however, in the page of user_settings it says not defined altho the table now have data.\nSo I've tried debugging it if it can pass a static data, so as you can see in the return view.\nUnfortunately, it still shows an error but now it says, Call to undefined method ::displaySC()\nHere is my SCModel as well, I have made sure that the $table and the $fillable strings there inside the array matches in the table.\n\n */\n protected $fillable = [\n 'sc_id',\n 'sc_desc',\n ];\n}\n\nI will also be posting my routes in my web.php file. So whenever I try to redirect directly to the user_settings I can access it so I think this route is working.\nRoute::get('/user_settings', [SCModel::class, 'displaySC'])->name('auth.user_settings');\n\nI would greatly appreciate any help that you could provide to my mistakes.\njust to give context, this is just purely for studying so if think the database or data-related it's okay to delete/drop it to my db."} +{"id": "000599", "text": "I am using laravel 11 and trying to use spatie laravel permission.\nBut I got error:\n Target class [permission] does not exist.\n\nI read some article, it must be added in kernel.php this code:\n'role' => \\Spatie\\Permission\\Middleware\\RoleMiddleware::class,\n'permission' => \\Spatie\\Permission\\Middleware\\PermissionMiddleware::class,\n'role_or_permission' => \\Spatie\\Permission\\Middleware\\RoleOrPermissionMiddleware::class,\n\nBut there is no kernel.php in laravel 11?"} +{"id": "000600", "text": "I have this form that i'm building for a CMS website which is displaying some information stored in the database. I want a user to be able to select for example on the buttons section which buttons he wants to use for the post. with the solution i'm trying to use right now,when a user selects one check box , all 4 check boxes are being selected. How do i fix this?\n \n \n\n\n\n\n\n\n===============controller================\n\npublic function message(request $request)\n {\n try {\n $fields = $request->validate([\n\n 'title' => 'required',\n 'body' => 'required',\n 'chatID' => 'nullable',\n 'channelID' => 'nullable',\n 'buttons' => 'required|array',\n 'images' => 'nullable',\n 'date' => 'required',\n 'time' => 'required',\n\n ]);\n if ($request->hasFile('sentimages')) {\n $fields['images'] = Storage::disk('public')->put('sentimages', $request->images);\n }\n $fields['buttons'] = json_encode($fields['buttons']);\n message::create($fields);\n return redirect()->route('CMS')->with('successfully added');\n } catch (\\Exception $e) {\n\n return redirect()->back()->withErrors(['title'=>'Failed to upload title','body'=>'failed to insert body',\n 'chatID'=>'failed to insert chatID','channelID'=>'failed to insert channelID',\n 'buttons'=>'failed to insert buttons','images'=>'failed to insert images','date'=>'failed to add date', 'time'=>'failed to add time']);\n }\n }\n\n public function telegramCMS()\n {\n\n $uploads = uploadImage::all(['images']);\n $buttons = TelegramUpload::all(['buttonTitle']);\n $chatId = ChatId::all(['chatId']);\n $channelId = channelID::all(['channelID']);\n\n foreach ($uploads as $upload) {\n $upload->images = asset('storage/' . 
$upload->images);\n }\n\n return inertia::render('Telegram/TelegramBotCMS', [\n 'uploads' => $uploads,\n 'buttons' => $buttons,\n 'chatId' => $chatId,\n 'channelId' => $channelId\n ]);\n }\n========MODELS=======\n\n'array','buttons'=>'array','channelID'=>'array','images'=>'array'];\n}\n\n========= Migration=====================\n\nid();\n $table->string('title');\n $table->string('body');\n $table->json('buttons');\n $table->json('chatID')->nullable();\n $table->json('channelID')->nullable();\n $table->json('images')->nullable();\n $table->time('time');\n $table->time('date');\n\n $table->timestamps();\n });\n }\n\n /**\n * Reverse the migrations.\n */\n public function down(): void\n {\n Schema::dropIfExists('messages');\n }\n};\n \n\nwhen i click on one check box, all of them are being selected"} +{"id": "000601", "text": "I am a bit confused with the anonymous components.\nI am trying to pass a data in the component using prepending : before the attribute. However, it throws me an error\n\nUndefined variable $sizes\n\nBut as you can see, I have passed it in the component.\nCurrently, I am using it and wanted to pass the variable with a data in the sub anonymous component.\nSo the tree goes something like this. Assuming that the layout/x-layout blade file contains the header and the navbar for my website and I do have this user_preference blade file.\nuser_preferance.blade.php\n\n\n @if ($counter > 0)\n \n @else\n \n @endif\n\n\nunder the user_preference blade file, I can access the data from the controller. so if I say\n

{{ $sizes}} \n\nunder the user_preference blade file, it will display it.\nI created a separated component for the forms, so as you can see above I have conditional statement, wherein I have to show two different forms depending of the result of the conditional statement.\nNow, if it falls under the if statement, I want the\n{{ $sizes}} \n\nto be passed on the anonymous component called with_sizes_section. So as you can see in the code above, I have tried prepending with a : before the attribute name.\nI have also tried displaying the data from the user_preference using this one.\nuser_preferance.blade.php\n\n\n @if ($counter > 0)\n {{ $sizes }}\n @else\n \n @endif\n\n\nand it displayed the sizes, however I not know why can't I pass it on to another component.\nI saw this Verified Answer, however as you can see it is similar already with the code I have provided above.\nAny ideas or help are appreciated!"} +{"id": "000602", "text": "How to register a command via action without using the Console/Kernel.php file, because it does not exist in Laravel 11.\nI executed the command\ncomposer requires laravel/tinker\nphp artisan vendor:publish --provider=\"Laravel\\Tinker\\TinkerServiceProvider\"\n\nand in config/tinker.php:\n 'commands' => [\n \\App\\Actions\\SitemapGenerateAction::class,\n ],\n\nhere is the action\n use AsAction, AsCommand;\n\n public string $commandSignature = 'sitemap:generate';\n public string $commandDescription = 'Generate the sitemap';\n\n protected array $sitemapData;\n\n public function handle(): void\n {\n ...\n }\n\n public function asCommand(Command $command): void\n {\n ...\n }\n\n public function asController(): array\n {\n ...\n }"} +{"id": "000603", "text": "I have this Array of objects where I wanna sum up the quantity having the same barcode. 
Am using Laravel if you can give me a eloquent solution that will be great.\n[\n {\n \"id\": 1,\n \"barcode\": \"555\",\n \"description\": \"pencil\",\n \"quantity\": \"3\",\n },\n {\n \"id\": 2,\n \"barcode\": \"555\",\n \"description\": \"pencil\",\n \"quantity\": \"1\",\n },\n {\n \"id\": 3,\n \"barcode\": \"123\",\n \"description\": \"paper\",\n \"quantity\": \"1\",\n },\n {\n \"id\": 4,\n \"barcode\": \"123\",\n \"description\": \"paper\",\n \"quantity\": \"8\",\n },\n\n]\n\ndesired output\n[\n {\n \"id\": 1,\n \"barcode\": \"555\",\n \"description\": \"pencil\",\n \"qty\": \"4\",\n },\n {\n \"id\": 2,\n \"barcode\": \"123\",\n \"description\": \"pencil\",\n \"qty\": \"9\",\n }\n]\n\nthanks"} +{"id": "000604", "text": "I am new to Laravel and am struggling to implement a simple Policy :)\nJust for testing I did ->\n/app/Policies/ReleasePolicy.php:\ngroupBy('barcode')\n ->map(fn ($group, $key) => [\n 'id' => $group->first()['id'],\n 'barcode' => $group->first()['barcode'],\n 'description' => $group->first()['description'],\n 'qty' => $group->sum('qty'),\n ])\n ->values();\n\nThanks, and hopefully someone will be able to point out what am missing here."} +{"id": "000606", "text": "According to the cPanel documentation:\n\nTo call a UAPI function with an API token, run the following command from the command line:\ncurl -H'Authorization: cpanel username:APITOKEN' 'https://example.com:2083/execute/Module/function?parameter=value'\n\n\n\n\nItem\nDescription\nExample\n\n\n\n\nusername\nThe cPanel account's username.\nusername\n\n\nAPITOKEN\nThe API token.\nU7HMR63FGY292DQZ4H5BFH16JLYMO01M\n\n\nexample.com\nYour cPanel server's domain\nexample.com\n\n\nModule\nThe API module name.\nEmail\n\n\nfunction\nThe API function's name.\nadd_pop\n\n\nparameter\nThe function's input parameters.\nemail\n\n\nvalue\nThe value to assign to the input parameter.\n12345luggage\n\n\n\nFor example, your command may resemble the following example:\ncurl -H'Authorization: cpanel username:U7HMR63FHY282DQZ4H5BIH16JLYSO01M' 'https://example.com:2083/execute/Email/add_pop?email=newuser&password=12345luggage'\n\n\n\nsource: https://api.docs.cpanel.net/cpanel/tokens/\n\nI also created the following function in my controller:\npublic function test()\n{\n $response = Http::post('https://example.com:2083' ,[\n 'username' => 'holidays',\n 'APITOKEN' => 'XPA6V9NSZ5JGKXU5VE2X214U53WROFI0',\n 'Module' => 'Email',\n 'function' => 'add_pop',\n 'parameter' => 'email',\n 'value' => 'test',\n \n ]);\n return $response;\n // dd($response);\n}\n\nHow can I send a request correctly to cPanel to create a new email and get a successful message in Laravel 10?\nThank you"} +{"id": "000607", "text": "What I did before error:\n1. I created a new controller in Controllers with name MyPlaceController\n\nnamespace App\\Http\\Controllers;\n\nuse Illuminate\\Http\\Request;\n\nclass MyPlaceController extends Controller\n{\n public function index()\n {\n return 'this is my place';\n }\n}\n\n2.\nAfter step 1 i went to web.php and writed a code:\n\nuse Illuminate\\Support\\Facades\\Route;\n\nRoute::get('/', function () {\n return view('welcome');\n});\n\nRoute::get('/my_page', 'MyPlaceController@index');\n\n3. Starting website with command php artisan serve\nAfter this step I got an error with text Target class [MyPlaceController] does not exist.\nIlluminate\\Contracts\\Container\\BindingResolutionException\nTarget class [MyPlaceController] does not exist.\n4. 
I tried to create those files by myself, but it didn't worked (I also tried to create those files using cmd, but after creating there was no line with name protected $namespace 'App\\\\Http\\\\Controllers)\nIn the Laravel video course, the author used version 10. To solve this problem he went to the file called RouteServiceProvider.php and uncommented the line with the text: protected $namespace 'App\\\\Http\\\\Controllers' and after this action the site worked again, but in my 11 version these files are not present, and therefore I can not do the same as he did. I will also write that I have only a file named AppServiceProvider.php from all 5 files in Providers (but the author has 5 files like in the picture).\nAuthor's files:\n\nAppServiceProvider.php\nAuthServiceProvider.php\nBroadcastServiceProvider.php\nEventServiceProvider.php\nRouteServiceProvider.php\n\nAnd also I want to show my project structure:\nMy project structure\n(just in case, I apologize that my question may not have been posed very correctly)"} +{"id": "000608", "text": "I'm new in laravel and I confused to how can I register middleware without making it global. in the documentation I only found it to be registered like this (bootstrap\\app.php):\n\nthis resulting every routes I go was applied with AdminCheck middleware. meanwhile I only want login routes to be applied. also with the laravel breeze there is 'auth' middleware but I can't find the code definition anywhere in the folder structure. can someone explain? thx"} +{"id": "000609", "text": "I am developing a sort of multi-tenant application with Laravel 11 on PHP 8.3, where each \"user\" must belong to a \"company\".\nFor user management, I started from Laravel Breeze, and then modified the users table to include a company_id which references companies:\nSchema::create(\"users\", function (Blueprint $table) {\n ...\n $table->foreignUuid(\"company_id\")->constrained();\n ...\n});\n\nThen I defined the relation in the model:\npublic function company(): BelongsTo {\n return $this->belongsTo(Company::class);\n}\n\nThe problem is, if in the controller I try to access $request->user()->company it comes out as null; the same happens if I try to access it with $request->user()->company()->first().\nHowever, if I try with artisan tinker, I see it works properly:\n$ php artisan tinker\nPsy Shell v0.12.4 (PHP 8.3.8 \u2014 cli) by Justin Hileman\n> $user = User::where('email', 'test@example.com')->firstOrFail();\n[!] 
Aliasing 'User' to 'App\\Models\\User' for this Tinker session.\n= App\\Models\\User {#6215\n id: \"9c549870-805a-4555-bd72-86ba982a3c04\",\n company_id: \"9c54986f-8284-4da9-b826-c7a723de279b\",\n name: \"TEST test\",\n email: \"test@example.com\",\n email_verified_at: \"2024-06-20 08:18:06\",\n #password: \"$2y$12$wboWRmK/9B5uOT28.u/BO..gIlY0Sz75l7kQL8eIGBdRcxB5dGSn2\",\n #remember_token: null,\n created_at: \"2024-06-20 08:18:06\",\n updated_at: \"2024-06-20 08:18:06\",\n deleted_at: null,\n }\n\n> $user->company;\n= App\\Models\\Company {#6248\n id: \"9c54986f-8284-4da9-b826-c7a723de279b\",\n name: \"TEST Administration\",\n is_master: 1,\n fiscal_id: null,\n email: null,\n phone: null,\n mobile: null,\n address_line_1: null,\n address_line_2: null,\n address_post_code: null,\n address_city: null,\n address_province: null,\n created_at: \"2024-06-20 08:18:06\",\n updated_at: \"2024-06-20 08:18:06\",\n deleted_at: null,\n }\n\nI have found this previous question, where a comment suggests to preload the relationship by modifying the user model by adding:\nprotected $with = ['company'];\n\nI tried it, and it seems to work, however it does not seem right to have to eager load it each time, even when it is not needed (the majority of cases).\nHow can I have the relationships work when accessing the user with $request->user() without preloading them?\nI have seen that it does not work even if I try to refresh() the entity, or to manually load() the relation; however, $user->company_id is correctly set and $company = Company::find($user->company_id) works, but I really do not see why I cannot use the declared relationship."} +{"id": "000610", "text": "Related to my previous question, I found out that due to an error I made, Laravel generates a wrong SQL query:\nselect * from \"companies\" where \"companies\".\"id\" = '9c54986f-8284-4da9-b826-c7a723de279b' and \"companies\".\"deleted_at\" is null and \"company_id\" = '9c54986f-8284-4da9-b826-c7a723de279b'\n\nThe problem here is that company_id does not exist in companies; however, the query does not generate an error when run, it just returns no result.\nI suppose the problem here is that \"company_id\" is treated as a literal instead of a column reference; if I remove the quotes I get a proper error:\nError: in prepare, no such column: company_id (1)\n\nI also get a proper error if I add the table prefix to the column name:\nsqlite> select * from \"companies\" where \"companies\".\"id\" = '9c54986f-8284-4da9-b826-c7a723de279b' and \"companies\".\"deleted_at\" is null and \"companies\".\"compa\nny_id\" = '9c54986f-8284-4da9-b826-c7a723de279b';\nError: in prepare, no such column: companies.company_id (1)\n\nIs there a way to solve this problem by acting on Laravel's or SQLite's configuration? 
I cannot alter how the queries are generated, as they are generated by the framework itself.\nAlso, I am NOT asking why this specific query behave as it does, that was already clear to me.\nThe fragment and \"company_id\" = '9c54986f-8284-4da9-b826-c7a723de279b' is generated by a global scope implemented like this:\nabstract readonly class UnlessAuthorizedScope implements Scope {\n public function __construct(\n private string $modelField,\n protected ?string $authorizingPermission,\n private string $userField,\n ) {}\n\n public function apply(Builder $builder, Model $model): void {\n if (Auth::hasUser()) {\n $user = Auth::user();\n\n if (\n !$this->authorizingPermission\n || !$user?->can($this->authorizingPermission)\n ) {\n $builder->where(\n $this->modelField,\n $user?->{$this->userField}\n );\n }\n }\n }\n}\n\nwhich is then implemented in:\nreadonly class CurrentCompanyScope extends UnlessAuthorizedScope {\n public function __construct(?string $authorizingPermission = null, ?string $modelField = null) {\n parent::__construct(\n $modelField ?? \"company_id\",\n $authorizingPermission,\n \"company_id\"\n );\n }\n}\n\nand finally used as:\nclass Company extends Model {\n protected static function booted(): void {\n parent::booted();\n static::addGlobalScope(new CurrentCompanyScope(\n CompanyPermission::ViewAll->value,\n // the error was here, instead of specifying \"id\", I kept the default \"company_id\" value\n ));\n }\n}"} +{"id": "000611", "text": "I am running a Laravel 10 with passport 11 hyn multitenant application. It's a legacy application that is a few years old and has worked very well until I upgraded my Laravel from 9 to 10 and passport from 10 to 11. I when I try to create a tenancy, everything works until the \"$company = Tenant::create($request);\" which also works quite well up to a point. It creates an entry into the tency.hostnames and tenancy.websites tables respectively and also creates a custom database for the new tenancy. How ever, It attempts to recreate all the system tables that resides in the tenant database. These migration files are used by the entire app and are located in migrations folder whiles the others that make up each tenncies's database are in migrations/tenant folder. I do'nt really undersatand why creating a new tenancy is trying to recreate the tables in the tenancy database which obviously already exists. 
I get this error when this happens\n\nSQLSTATE[42S01]: Base table or view already exists: 1050 Table 'websites' already exists (Connection: system, SQL: create table websites (id bigint unsigned not null auto_increment primary key, uuid varchar(191) not null, created_at timestamp null, updated_at timestamp null, deleted_at timestamp null) default character set utf8mb4 collate 'utf8mb4_unicode_ci')\n\nthis is my register function\npublic function register(Request $request)\n {\n $facility_name = Hostname::where('tenant_facility_name', $request->tenant_facility_name)->first();\n $checkEmail = Hostname::where('email', $request->email)-`>`first();\n $fqdn = Hostname::where('subdomain', $request->fqdn)->first();\n\n if($facility_name){\n return response(['message' => 'A facility with this name already exist'], 409);\n }\n if ($checkEmail) {\n return response(['message' => 'A facility with this email already exist'], 409);\n }\n if ($fqdn) {\n return response(['message' => 'A facility with this name already exist'], 409);\n }\n\n // Validate the incoming request\n $this->validator($request->all())->validate();\n\n try {\n // create Tenant Account\n $company = Tenant::create($request);\n } catch(\\Exception $e) {\n return response($e->getMessage(), 405);\n }\n\n event(new Registered($user = $this->create($request->all())));\n // set trial period for new account without actual subscription for 30 days\n // $host = HostnameModel::where('fqdn', $company->hostname->fqdn)->first();\n // $host->trial_ends_at = now()->addDays(30);\n // $host->save();\n $this->createTrialPeriod($request->email);\n\n // Function to send email\n $name = $request->othernames . ' ' . $request->surname;\n $email = $request->email;\n $facilityName = $request->tenant_facility_name;\n\n // Send email to new account admin\n try {\n Mail::to($email)->send(new TenantAccountCreation($name, $email, $facilityName));\n } catch(\\Exception $e) {\n // sending 200 so that the registration continues without queing email to new account user\n return response($e->getMessage(), 200);\n }\n\n return response()->json(['message' => 'Account Has Been Created Successfully'], 200);\n }\n\nand this is my Tenant.php file\nwebsite = $website ?? $sub->website;\n $this->hostname = $hostname ?? $sub->websites->hostnames->first();\n }\n\n public function delete()\n {\n app(HostnameRepository::class)->delete($this->hostname, true);\n app(WebsiteRepository::class)->delete($this->website, true);\n }\n\n\n\n public static function create($request): Tenant\n {\n // Create New Website\n $website = new Website;\n\n // Attached the fqdn to a random string of 5\n $website->uuid = $request->fqdn.'_'.Str::random(5);\n\n app(WebsiteRepository::class)->create($website);\n\n // associate the website with a hostname\n $hostname = new Hostname;\n $hostname->subdomain = $request->fqdn;\n $hostname->email = $request->email;\n $hostname->currency = $request->currency;\n\n\n // Add the facility name to hostname table\n $hostname->tenant_facility_name = $request->tenant_facility_name;\n\n // merge\n // $request->merge(['fqdn' => $request->fqdn . '.' . env('APP_URL_BASE')]);\n $fqdn = $request->fqdn . '.' 
.config('services.environment');\n\n $hostname->fqdn = $fqdn;\n // $hostname->fqdn = $request->fqdn;\n\n app(HostnameRepository::class)->attach($hostname, $website);\n\n // make hostname current\n app(Environment::class)->tenant($website);\n\n Artisan::call('passport:install');\n\n return new Tenant($website, $hostname);\n }\n\n\n public static function tenantExists($name)\n {\n return Hostname::where('fqdn', $name)->exists();\n }\n\n}\n\nand this is my database.php file\n env('DB_CONNECTION', 'mysql'),\n\n /*\n |--------------------------------------------------------------------------\n | Database Connections\n |--------------------------------------------------------------------------\n |\n | Here are each of the database connections setup for your application.\n | Of course, examples of configuring each database platform that is\n | supported by Laravel is shown below to make development simple.\n |\n |\n | All database work in Laravel is done through the PHP PDO facilities\n | so make sure you have the driver for your particular database of\n | choice installed on your machine before you begin development.\n |\n */\n\n 'connections' => [\n\n 'system' => [\n 'driver' => 'mysql',\n 'host' => env('TENANCY_HOST', '127.0.0.1'),\n 'port' => env('TENANCY_PORT', '3306'),\n 'database' => env('TENANCY_DATABASE', 'tenancy'),\n 'username' => env('TENANCY_USERNAME', 'User1'),\n 'password' => env('TENANCY_PASSWORD', 'mypassword'),\n 'unix_socket' => env('DB_SOCKET', ''),\n 'charset' => 'utf8mb4',\n 'collation' => 'utf8mb4_unicode_ci',\n 'prefix' => '',\n 'strict' => true,\n 'engine' => null,\n ],\n\n 'sqlite' => [\n 'driver' => 'sqlite',\n 'url' => env('DATABASE_URL'),\n 'database' => env('DB_DATABASE', database_path('database.sqlite')),\n 'prefix' => '',\n 'foreign_key_constraints' => env('DB_FOREIGN_KEYS', true),\n ],\n\n 'mysql' => [\n 'driver' => 'mysql',\n 'url' => env('DATABASE_URL'),\n 'host' => env('DB_HOST', '127.0.0.1'),\n 'port' => env('DB_PORT', '3306'),\n 'database' => env('DB_DATABASE', 'forge'),\n 'username' => env('DB_USERNAME', 'forge'),\n 'password' => env('DB_PASSWORD', ''),\n 'unix_socket' => env('DB_SOCKET', ''),\n 'charset' => 'utf8mb4',\n 'collation' => 'utf8mb4_unicode_ci',\n 'prefix' => '',\n 'prefix_indexes' => true,\n 'strict' => true,\n 'engine' => null,\n 'options' => extension_loaded('pdo_mysql') ? array_filter([\n PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),\n ]) : [],\n ],\n\n 'pgsql' => [\n 'driver' => 'pgsql',\n 'url' => env('DATABASE_URL'),\n 'host' => env('DB_HOST', '127.0.0.1'),\n 'port' => env('DB_PORT', '5432'),\n 'database' => env('DB_DATABASE', 'forge'),\n 'username' => env('DB_USERNAME', 'forge'),\n 'password' => env('DB_PASSWORD', ''),\n 'charset' => 'utf8',\n 'prefix' => '',\n 'prefix_indexes' => true,\n 'schema' => 'public',\n 'sslmode' => 'prefer',\n ],\n\n 'sqlsrv' => [\n 'driver' => 'sqlsrv',\n 'url' => env('DATABASE_URL'),\n 'host' => env('DB_HOST', 'localhost'),\n 'port' => env('DB_PORT', '1433'),\n 'database' => env('DB_DATABASE', 'forge'),\n 'username' => env('DB_USERNAME', 'forge'),\n 'password' => env('DB_PASSWORD', ''),\n 'charset' => 'utf8',\n 'prefix' => '',\n 'prefix_indexes' => true,\n ],\n\n ],\n\n /*\n |--------------------------------------------------------------------------\n | Migration Repository Table\n |--------------------------------------------------------------------------\n |\n | This table keeps track of all the migrations that have already run for\n | your application. 
Using this information, we can determine which of\n | the migrations on disk haven't actually been run in the database.\n |\n */\n\n 'migrations' => 'migrations',\n\n /*\n |--------------------------------------------------------------------------\n | Redis Databases\n |--------------------------------------------------------------------------\n |\n | Redis is an open source, fast, and advanced key-value store that also\n | provides a richer body of commands than a typical key-value system\n | such as APC or Memcached. Laravel makes it easy to dig right in.\n |\n */\n\n 'redis' => [\n\n 'client' => env('REDIS_CLIENT', 'predis'),\n\n 'options' => [\n 'cluster' => env('REDIS_CLUSTER', 'predis'),\n 'prefix' => Str::slug(env('APP_NAME', 'laravel'), '_').'_database_',\n ],\n\n 'default' => [\n 'host' => env('REDIS_HOST', '127.0.0.1'),\n 'password' => env('REDIS_PASSWORD', null),\n 'port' => env('REDIS_PORT', 6379),\n 'database' => env('REDIS_DB', 0),\n ],\n\n 'cache' => [\n 'host' => env('REDIS_HOST', '127.0.0.1'),\n 'password' => env('REDIS_PASSWORD', null),\n 'port' => env('REDIS_PORT', 6379),\n 'database' => env('REDIS_CACHE_DB', 1),\n ],\n\n ],\n\n];\n\n\nupdating the create function in my Tenant.php to:\n public static function create($request): Tenant\n {\n try {\n // Create New Website\n \\Log::info('creating new website info');\n $website = new Website;\n\n // Attached the fqdn to a random string of 5\n $website->uuid = $request->fqdn . '_' . Str::random(5);\n\n app(WebsiteRepository::class)->create($website);\n \\Log::info('website created');\n\n // Associate the website with a hostname\n $hostname = new Hostname;\n $hostname->subdomain = $request->fqdn;\n $hostname->email = $request->email;\n $hostname->currency = $request->currency;\n $hostname->tenant_facility_name = $request->tenant_facility_name;\n\n $fqdn = $request->fqdn . '.' . config('services.environment');\n $hostname->fqdn = $fqdn;\n\n app(HostnameRepository::class)->attach($hostname, $website);\n \\Log::info('creating new hostname');\n\n // Make hostname current\n app(Environment::class)->tenant($website);\n \\Log::info('make tenancy current');\n\n // Run Passport install\n \\Log::info('starting passport install');\n Artisan::call('passport:install');\n \\Log::info('passport installed');\n\n return new Tenant($website, $hostname);\n\n } catch (\\Exception $e) {\n \\Log::error('Error creating tenant: ' . $e->getMessage());\n throw $e;\n }\n }\n\nthis is what is logged in the laravel log file\n\n[2024-06-21 00:16:01] local.INFO: creating new website info\n[2024-06-21 00:17:15] local.INFO: website created\n[2024-06-21 00:17:15] local.INFO: creating new hostname\n[2024-06-21 00:17:15] local.INFO: make tenancy current\n[2024-06-21 00:17:15] local.INFO: starting passport install\n[2024-06-21 00:17:16] local.ERROR: Error creating tenant: SQLSTATE[42S01]: Base table or view already exists: 1050 Table 'websites' already exists (Connection: system, SQL: create table websit\n\nwhich clearly proves that the line\nArtisan::call('passport:install')\nis responsible for the error.\nQuestion is : Why is this,\nArtisan::call('passport:install')\ncommand trying to run all migration files in the application all over again?"} +{"id": "000612", "text": "In my Controller that extends the Illuminate\\Routing\\Controller, i am trying to implement some middlware validations using the new HasMiddleware.\nThis is the code that i am trying to run:\n ['index', 'show']]),\n new Middleware('permission:write.' . 
self::$readPermission, ['only' => ['edit', 'store', 'update', 'destroy', 'restore']])\n ], self::$customPermissionChecks);\n }\n}\n\nHowever Illuminate\\Routing\\Controller's middlware() is confliting with the HasMiddleware's static method giving me the following error:\nCannot make non static method Illuminate\\Routing\\Controller::middleware() static in class App\\Http\\Controllers\\BaseController"} +{"id": "000613", "text": "I'd like to insert the user-id of the current user into a table in column \"user_id\". The field is a relation to the user table.\nMigration / database schema\n Schema::create('pdlocations', function (Blueprint $table) {\n $table->id();\n $table->timestamps();\n $table->decimal('lon', 10, 7);\n $table->decimal('lat', 10, 7);\n $table->string('map'); \n $table->unsignedBigInteger('user_id'); \n $table->foreign('user_id')\n ->references('id')\n ->on('users');\n });\n\nIn the controler (PdlocationController.php)\n public function store(PdlocationStoreRequest $request): RedirectResponse\n {\n $request->merge([\n 'user_id' => auth()->user()->id,\n// 'user_id' => auth()->user(),\n ]);\n \n $this->validate($request, [\n 'user_id' => 'required|exists:users,id',\n ]);\n\n Pdlocation::create($request->validated());\n \n return redirect()->route('admin.pdlocation.index')\n ->with('success', 'Pdlocation created successfully.');\n }\n\nIf merging into the request the current userID auth()->user()->id I get the following error message:\nSQLSTATE[HY000]: General error: 1364 Field 'user_id' doesn't have a default value\n\ninsert into\n `pdlocations` (`map`, `lon`, `lat`, `updated_at`, `created_at`)\nvalues\n (test, 66, 55, 2024 -06 -21 18: 42: 57, 2024 -06 -21 18: 42: 57)\n\nIf merging into the request instead the user object auth()->user() the validator says The selected user id is invalid.\nAny idea or suggestion what I'm missing?"} +{"id": "000614", "text": "Struggling to get the correct results for a belongsToMany that references the same model.\nRelationship definition in the Status model:\npublic function represents(): BelongsToMany {\n return $this->belongsToMany(Status::class, 'status_represents', 'parent_id', 'child_id');\n}\n\nparent_id and child_id both reference the status table id.\nData in the status_represents table:\n\nI want to get all the Status records with a parent_id = 7\nQuery I tried:\n$activeStatus = Status::whereHas('represents', function (Builder $query){\n $query->where('status_represents.parent_id', 7);\n})->get();\n\nThat returns the status record with id = 7.\nNot the status records with parent_id = 7. Expecting records with id: 1,2,3.\nIn SQL this works:\nselect * \nfrom statuses as s\ninner join status_represents as sr on sr.child_id = s.id\nwhere sr.parent_id = 7"} +{"id": "000615", "text": "I am trying to understad the keypoint output of the yolov7, but I didn't find enough information about that.\nI have the following output:\narray([ 0, 0, 430.44, 476.19, 243.75, 840, 0.94348, 402.75, 128.5, 0.99902, 417.5, 114.25, 0.99658, 385.5, 115, 0.99609, 437.75, 125.5, 0.89209, 366.75, 128, 0.66406, 471, 229.62,\n 0.97754, 346.75, 224.88, 0.97705, 526, 322.75, 0.95654, 388.5, 340.75, 0.95898, 424.5, 314.75, 0.94873, 483.5, 335.5, 0.9502, 465.5, 457.75, 0.99219, 381.5, 456.25, 0.99219, 451.5, 649,\n 0.98584, 379.25, 649.5, 0.98633, 446.5, 818, 0.92285, 366, 829.5, 0.9248])\n\nthe paper https://arxiv.org/pdf/2204.06806.pdf tells \"So, in total there are 51 elements for 17 keypoints associated with an anchor. 
\" but the length is 58.\nthere are 18 numbers that probably are confidences of a keypoint:\narray([ 0.94348, 0.99902,, 0.99658, 0.99609, 0.89209, 0.66406, 0.97754, 0.97705, 0.95654, 0.95898, 0.94873, 0.9502, 0.99219, 0.99219,\n 0.98584, 0.98633, 0.92285, 0.9248])\n\nBut the paper tells that are 17 keypoints.\nIn this repo https://github.com/retkowsky/Human_pose_estimation_with_YoloV7/blob/main/Human_pose_estimation_YoloV7.ipynb tells that the keypoints are the following:\n\nbut that shape doesn't match the prediction:\n\nIs the first image right about the keypoints?\nand what are the first four digits?\n 0, 0, 430.44, 476.19\n\nThanks\nEDIT\nThis is not a complet answer but editing the plot function I can get the following information\nGiven the following output keypoint:\narray([[ 0, 0, 312.31, 486, 291.75, 916.5, 0.94974, 304.5, 118.75, 0.99902, 320.75, 102.25, 0.99756, 287.75, 103.25, 0.99658, 345, 112, 0.96338, 268.25, 115.25, 0.69531, 394,\n 226.25, 0.98145, 228.25, 230.12, 0.98389, 428.5, 358.5, 0.95898, 192.88, 364.75, 0.96533, 407, 464.25, 0.95166, 215.75, 464.25, 0.9585, 363.75, 491, 0.99219, 257.75, 491.5, 0.99268,\n 361.5, 680, 0.9834, 250.88, 679, 0.98438, 361, 861.5, 0.91064, 247, 863, 0.91504]])\n\nfrom this position ouput[7:] you can get the points of each keypoint, with the following sort as you can see in the image\n\narray([ 304.5, 118.75, 0.99902, 320.75, 102.25, 0.99756, 287.75, 103.25, 0.99658, 345, 112, 0.96338, 268.25, 115.25, 0.69531, 394, 226.25, 0.98145, 228.25, 230.12, 0.98389, 428.5, 358.5, 0.95898,\n 192.88, 364.75, 0.96533, 407, 464.25, 0.95166, 215.75, 464.25, 0.9585, 363.75, 491, 0.99219, 257.75, 491.5, 0.99268, 361.5, 680, 0.9834, 250.88, 679, 0.98438, 361, 861.5, 0.91064,\n 247, 863, 0.91504])\n\nbut I am not sure about what are the rest of the values:\n0, 0, 312.31, 486, 291.75, 916.5, 0.94974,"} +{"id": "000616", "text": "I have trained a YOLOv7 model using the Roboflow notebook and my own dataset: https://colab.research.google.com/drive/1X9A8odmK4k6l26NDviiT6dd6TgR-piOa\nI worked with these notebooks before, but never had images with more than 100 objects, but now, I have trained a model to detect microbiologic colonies, and the model is detecting up to a max of 100 objects.\nAt first I thought it was a problem of the network not being able of detecting all the objects because it was not well trained or due to precision (My dataset is composed of 500 images, and the final accuracy is about 80%).\nBut, I have some images that has between 15-30 objects, and it detects all fine. In my images that objects are clearly more than 100, the network always counts up to 100 objects, never more.\nIs there any limit to yolov7 in object quantity? Or maybe a parameter that has to be changed in training phase?"} +{"id": "000617", "text": "I have trained a YOLOv8 object detection model using a custom dataset, and I want to convert it to a Core ML model so that I can use it on iOS.\nAfter exporting the model, I have a converted model to core ml, but I need the coordinates or boxes of the detected objects as output in order to draw rectangular boxes around the detected objects.\nAs a beginner in this area, I am unsure how to achieve this. 
Can anyone help me with this problem?\nTraining model:\n!yolo task=detect mode=train model=yolov8s.pt data= data.yaml epochs=25 imgsz=640 plots=True\n\nValidation:\n!yolo task=detect mode=val model=runs/detect/train/weights/best.pt data=data.yaml\n\nExport this model to coreML:\n!yolo mode=export model=runs/detect/train/weights/best.pt format=coreml\n\nHow can I get the co ordinate output?"} +{"id": "000618", "text": "def convert_coco_to_yolov8(coco_file):\n\n with open(coco_file) as f:\n coco = json.load(f)\n\n\n images = coco['images']\n annotations = coco['annotations'] \n categories = {cat['id']: cat['name'] for cat in coco['categories']}\n\n \n os.makedirs('labels', exist_ok=True)\n\n \n for image in tqdm(images, desc='Converting images'):\n image_id = image['id']\n filename = image['file_name']\n\n\n\n label_filename = filename.split('.png')[0]\n label_path = os.path.join('labels', f'{label_filename}.txt')\n with open(label_path, 'w') as f:\n\n for ann in annotations:\n if ann['image_id'] != image_id:\n continue\n\n img_width = image['width']\n img_height = image['height']\n\n xmin, ymin, width, height = ann['bbox']\n \n xmax = xmin + width\n ymax = ymin + height\n xcen = (xmin + xmax) / 2\n ycen = (ymin + ymax) / 2\n # xcen = (xmin + xmax) / 2 / img_width\n # ycen = (ymin + ymax) / 2 / img_height\n w = xmax - xmin\n h = ymax - ymin\n label = categories[ann['category_id']]\n label_id = ann['category_id']\n \n \n\n segmentation_points_list = []\n for segmentation in ann['segmentation']:\n segmentation_points = [str(point / img_width) for point in segmentation]\n segmentation_points_list.append(' '.join(segmentation_points))\n segmentation_points_string = ' '.join(segmentation_points_list)\n\n\n \n\n \n line = '{} {} {} {} {} {}\\n'.format(label_id, xcen / img_width, ycen / img_height, w / img_width, h / img_height, segmentation_points_string )\n f.write(line)\n\nthe script is getting the labels but when i train for YOLOv8 the labels are seems wrong ,I need to convert a coco json to YOLOV8 txt file . label should contain segmentation also. Note my JSON file have different image size for all images"} +{"id": "000619", "text": "I'm training YOLOv8 in Colab on a custom dataset. How can I save the model after some epochs and continue the training later. I did the first epoch like this:\nimport torch\n\nmodel = YOLO(\"yolov8x.pt\")\nmodel.train(data=\"/image_datasets/Website_Screenshots.v1-raw.yolov8/data.yaml\", epochs=1)\n\nWhile looking for the options it seems that with YOLOv5 it would be possible to save the model or the weights dict. I tried these but either the save or load doesn't seem to work in this case:\ntorch.save(model, 'yolov8_model.pt')\ntorch.save(model.state_dict(), 'yolov8x_model_state.pt')"} +{"id": "000620", "text": "I have a laptop with following configurations\n Processor : AMD Ryzen 7 4800H with Radeon Graphics 2.90 GHz\n Installed RAM : 16.0 GB (15.4 GB usable)\n Windows Edition : Windows 11 Home Single Language\n Version : 22H2\n OS : 22621.1555\n NVIDIA GTX GEFORCE 1650 GRAPHICS CARD\n NVIDIA DRIVER : 31.0.15.3161\n\nIts a brand new laptop\nWith following Python and CUDA installed:\nPython 3.10.11\n\n---------------------------------------------------------------------------------------+\n| NVIDIA-SMI 531.61 Driver Version: 531.61 CUDA Version: 12.1 |\n|-----------------------------------------+----------------------+----------------------+\n| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. 
ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|=========================================+======================+======================|\n| 0 NVIDIA GeForce GTX 1650 WDDM | 00000000:01:00.0 Off | N/A |\n| N/A 44C P0 15W / N/A| 0MiB / 4096MiB | 0% Default |\n| | | N/A |\n+-----------------------------------------+----------------------+----------------------+\n\n+---------------------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=======================================================================================|\n| No running processes found |\n+---------------------------------------------------------------------------------------+\n\nBut whenever I try to run my YOLOV8 model for object detection on this, it shuts down during 1st epoch only. Not sure why its happening. Any help is highly appreciated.\nMy Python code\nimport tensorflow as tf\nfrom ultralytics import YOLO\nprint(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))\nprint(tf.test.is_built_with_cuda())\nprint(tf.config.list_physical_devices('GPU'))\n\n# Create a TensorFlow session with GPU growth enabled\nconfig = tf.compat.v1.ConfigProto()\nconfig.gpu_options.allow_growth = True\nsess = tf.compat.v1.Session(config=config)\nphysical_devices = tf.config.list_physical_devices('GPU')\ntf.config.experimental.set_memory_growth(physical_devices[0], True)\nprint(tf.test.is_built_with_cuda())\n# Run your code in the session\nwith sess.as_default():\n # Load the model.\n model = YOLO('yolov8n.pt')\n \n # Training.\n results = model.train(\n data='data.yaml',\n imgsz=640,\n epochs=5,\n batch=8,\n name='yolov8n_custom')\n\nPS I would also like to know what are the hardware requirements for YOLOV8"} +{"id": "000621", "text": "We are trying to get the detected object names using Python and YOLOv8 with the following code.\nimport cv2\nfrom ultralytics import YOLO\n\n\ndef main():\n cap = cv2.VideoCapture(0)\n cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)\n cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)\n\n model = YOLO(\"yolov8n.pt\")\n\n while True:\n ret, frame = cap.read()\n result = model(frame, agnostic_nms=True)[0]\n\n print(result)\n\n if cv2.waitKey(30) == 27:\n break\n\n cap.release()\n cv2.destroyAllWindows()\n\n\nif __name__ == \"__main__\":\n main()\n\n\nThe following two types are shown on the log.\n0: 384x640 1 person, 151.2ms\nSpeed: 0.6ms preprocess, 151.2ms inference, 1.8ms postprocess per image at shape (1, 3, 640, 640)\n\nThe second log is the one we displayed using print, how do we get the person from now on? 
Presumably we get the person by giving 0 to the names, but where do we get the 0 from?\nultralytics.yolo.engine.results.Results object with attributes:\n\nboxes: ultralytics.yolo.engine.results.Boxes object\nkeypoints: None\nkeys: ['boxes']\nmasks: None\nnames: {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}\norig_img: array([[[51, 58, 64],\n [52, 59, 65],\n [54, 59, 65],\n ...,\n [64, 68, 74],\n [62, 67, 73],\n [62, 67, 73]],\n\n [[51, 58, 64],\n [53, 59, 65],\n [54, 59, 65],\n ...,\n [63, 68, 74],\n [62, 67, 73],\n [62, 67, 73]],\n\n [[53, 58, 64],\n [53, 58, 64],\n [53, 58, 64],\n ...,\n [61, 67, 73],\n [61, 67, 73],\n [61, 67, 73]],\n\n ...,\n\n [[43, 48, 58],\n [42, 47, 57],\n [41, 46, 56],\n ...,\n [24, 35, 49],\n [23, 34, 48],\n [23, 34, 48]],\n\n [[44, 48, 59],\n [43, 47, 57],\n [42, 46, 56],\n ...,\n [26, 35, 49],\n [26, 35, 49],\n [24, 33, 48]],\n\n [[45, 48, 59],\n [43, 45, 56],\n [40, 43, 54],\n ...,\n [26, 35, 49],\n [26, 35, 49],\n [25, 33, 48]]], dtype=uint8)\norig_shape: (720, 1280)\npath: 'image0.jpg'\nprobs: None\nspeed: {'preprocess': 1.6682147979736328, 'inference': 79.47301864624023, 'postprocess': 1.0020732879638672}\n\nWe would like to know the solution in this way. But if it is not possible, we can use another method if it is a combination of Python and YOLOv8. We plan to display bounding boxes and object names.\nAdditional Information\nI changed the code as follows.\n ret, frame = cap.read()\n # result = model(frame, agnostic_nms=True)[0]\n result = model([frame])[0]\n\n boxes = result.boxes\n masks = result.masks\n probs = result.probs\n\n print(\"[boxes]==============================\")\n print(boxes)\n print(\"[masks]==============================\")\n print(masks)\n print(\"[probs]==============================\")\n print(probs)\n\nAfter all, the following person is not included. How should we determine that?\n[boxes]==============================\nWARNING \u26a0\ufe0f 'Boxes.boxes' is deprecated. 
Use 'Boxes.data' instead.\nultralytics.yolo.engine.results.Boxes object with attributes:\n\nboxes: tensor([[4.7356e+01, 7.2858e+00, 1.1974e+03, 7.1092e+02, 8.6930e-01, 0.0000e+00]])\ncls: tensor([0.])\nconf: tensor([0.8693])\ndata: tensor([[4.7356e+01, 7.2858e+00, 1.1974e+03, 7.1092e+02, 8.6930e-01, 0.0000e+00]])\nid: None\nis_track: False\norig_shape: tensor([ 720, 1280])\nshape: torch.Size([1, 6])\nxywh: tensor([[ 622.4028, 359.1004, 1150.0942, 703.6293]])\nxywhn: tensor([[0.4863, 0.4988, 0.8985, 0.9773]])\nxyxy: tensor([[ 47.3557, 7.2858, 1197.4500, 710.9150]])\nxyxyn: tensor([[0.0370, 0.0101, 0.9355, 0.9874]])\n[masks]==============================\nNone\n[probs]==============================\nNone"} +{"id": "000622", "text": "I'm currently working in a project in which I'm using Flask and Yolov8 together.\nWhen I run this code\nfrom ultralytics import YOLO\n\nmodel = YOLO(\"./yolov8n.pt\")\n\nresults = model.predict(source=\"../TEST/doggy.jpg\", save=True, save_txt=True)\n\nthe output will be saved in this default directory /run/detect/\nlike\nUltralytics YOLOv8.0.9 Python-3.10.8 torch-2.0.0+cpu CPU\nFusing layers... \nYOLOv8n summary: 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs\nResults saved to d:\\runs\\detect\\predict4\n1 labels saved to d:\\runs\\detect\\predict4\\labels\n\nand what I want is the predict directory number or the entire directory path in a variable.\nI tried capturing the path using sys.stdout methods but i want a direct solution."} +{"id": "000623", "text": "I want to segment an image using yolo8 and then create a mask for all objects in the image with specific class.\nI have developed this code:\nimg=cv2.imread('images/bus.jpg')\nmodel = YOLO('yolov8m-seg.pt')\nresults = model.predict(source=img.copy(), save=False, save_txt=False)\nclass_ids = np.array(results[0].boxes.cls.cpu(), dtype=\"int\")\nfor i in range(len(class_ids)):\n if class_ids[i]==0:\n empty_image = np.zeros((height, width,3), dtype=np.uint8)\n res_plotted = results[0][i].plot(boxes=0, img=empty_image)\n\n\nIn the above code, res_plotted is the mask for one object, in RGB. I want to add all of these images to each other and create a mask for all objects with class 0 (it is a pedestrian in this example)\nMy questions:\n\nHow can I complete this code?\nIs there any better way to achieve this without having a loop?\nIs there any utility function in the yolo8 library to do this?"} +{"id": "000624", "text": "I have this output that was generated by model.predict()\n0: 480x640 1 Hole, 234.1ms\nSpeed: 3.0ms preprocess, 234.1ms inference, 4.0ms postprocess per image at shape (1, 3, 640, 640)\n\n0: 480x640 1 Hole, 193.6ms\nSpeed: 3.0ms preprocess, 193.6ms inference, 3.5ms postprocess per image at shape (1, 3, 640, 640)\n\n...\n\nHow do I hide the output from terminal?\nI can't find out the information in this official link\nhttps://docs.ultralytics.com/modes/predict/#arguments"} +{"id": "000625", "text": "I am trying to draw a segmentation mask from a YOLO segmentation mask dataset. 
The annotation line I am reading looks like this:\n36 0.6158357764423077 0.814453125 0.6158357764423077 0.8095703125 0.6070381225961539 0.8095703125 0.6041055721153846 0.8115234375 0.5894428149038462 0.8154296875 0.5747800576923077 0.8125 0.5513196490384615 0.8134765625 0.5483870961538462 0.81640625 0.5923753653846154 0.818359375 0.6158357764423077 0.814453125\nI am using cv2.polylines to draw the shape but am getting an error:\nimage_height, image_width, c = img.shape\nisClosed = True\ncolor = (255, 0, 0)\nthickness = 2\n\nwith open(annotation_file) as f:\n for line in f:\n split_line = line.split()\n class_id = split_line[0]\n mask_shape = [float(numeric_string) for numeric_string in split_line[1:len(split_line)]]\n mask_points = []\n for i in range(0,len(mask_shape),2):\n x,y = mask_shape[i:i+2]\n mask_points.append((x * image_width, y * image_height))\n points = np.array([mask_points])\n image = cv2.polylines(img, points,\n isClosed, color, thickness)\n break\n\nError:\nOpenCV(4.7.0) /Users/xperience/GHA-OCV-Python/_work/opencv-python/opencv-python/opencv/modules/imgproc/src/drawing.cpp:2434: error: (-215:Assertion failed) p.checkVector(2, CV_32S) >= 0 in function 'polylines'"} +{"id": "000626", "text": "getting IndexError: index 1 is out of bounds for dimension 1 with size 1 while training yolo v8\nI am trying defect detection with yolov8 and i am expecting a detection of those defect on full scale image\nInput is 416 * 416 image\nusing Python 3.8.6,windows11 pro\ndata.yaml file:\npath: /Users/pksha/yolo\ntrain: train/images\nval: val/images\nnc: 0\nnames: ['particle']\n**python code:\npip install ultralytics\nfrom ultralytics import YOLO\n# LOAD model\n model=YOLO(\"yolov8l.yaml\")\n results=model.train(data=\"data.yaml\",epochs=1)\n\nI have tried both increasing and decresing to check if it works but both showed error."} +{"id": "000627", "text": "My project aims to detect object labels and coordinates and then convert them into a string which is converted into voice using gTTS but I keep getting an attribute error in the prediction labels. 
I am new to this framework, any help will be appreciated.\nCode:\nimport cv2\nfrom gtts import gTTS\nimport os\nfrom ultralytics import YOLO\n\ndef convert_labels_to_text(labels):\n text = \", \".join(labels)\n return text\n\nclass YOLOWithLabels(YOLO):\n def __call__(self, frame):\n results = super().__call__(frame)\n labels = results.pred[0].get_field(\"labels\").tolist()\n annotated_frame = results.render()\n return annotated_frame, labels\n\ncap = cv2.VideoCapture(0)\nmodel = YOLOWithLabels('yolov8n.pt')\n\nwhile cap.isOpened():\n success, frame = cap.read()\n\n if success:\n annotated_frame, labels = model(frame)\n\n message = convert_labels_to_text(labels)\n\n tts_engine = gTTS(text=message) # Initialize gTTS with the message\n\n tts_engine.save(\"output.mp3\")\n os.system(\"output.mp3\")\n\n cv2.putText(annotated_frame, message, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)\n cv2.imshow(\"YOLOv8 Inference\", annotated_frame)\n\n if cv2.waitKey(1) & 0xFF == ord(\"q\"):\n break\n\n else:\n break\n\ncap.release()\ncv2.destroyAllWindows()\n\nError\nFile \"C:\\Users\\alien\\Desktop\\YOLOv8 project files\\gtts service\\testservice.py\", line 13, in __call__\n labels = results.pred[0].get_field(\"labels\").tolist()\n ^^^^^^^^^^^^\nAttributeError: 'list' object has no attribute 'pred'\n\nprint(results)\norig_shape: (480, 640)\npath: 'image0.jpg'\nprobs: None\nsave_dir: None\nspeed: {'preprocess': 3.1604766845703125, 'inference': 307.905912399292, 'postprocess': 2.8924942016601562}]\n0: 480x640 1 person, 272.4ms\nSpeed: 3.0ms preprocess, 272.4ms inference, 4.0ms postprocess per image at shape (1, 3, 640, 640)\n[ultralytics.yolo.engine.results.Results object with attributes:\n boxes: ultralytics.yolo.engine.results.Boxes object\n keypoints: None\n keys: ['boxes']\n masks: None\n names: {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}\n orig_img: array([[[168, 167, 166],\n [165, 165, 165],\n [165, 166, 167],\n ...,\n [183, 186, 178],\n [183, 186, 178],\n [184, 187, 179]],\n\n [[168, 167, 165],\n [166, 165, 165],\n [166, 167, 166],\n ...,\n [184, 187, 179],\n [183, 186, 178],\n [184, 187, 179]],\n\n [[168, 167, 164],\n [167, 167, 164],\n [167, 167, 165],\n ...,\n [184, 187, 178],\n [184, 187, 179],\n [183, 186, 178]],\n\n ...,\n\n [[196, 192, 185],\n [196, 192, 185],\n [196, 192, 185],\n ...,\n [ 25, 29, 38],\n [ 22, 25, 35],\n [ 20, 24, 34]],\n\n [[199, 195, 187],\n [197, 193, 
186],\n [197, 193, 186],\n ...,\n [ 23, 26, 35],\n [ 22, 25, 35],\n [ 22, 25, 35]],\n\n [[199, 195, 187],\n [199, 195, 187],\n [199, 195, 187],\n ...,\n [ 20, 24, 33],\n [ 19, 23, 33],\n [ 19, 23, 33]]], dtype=uint8)"} +{"id": "000628", "text": "I have a YOLOv8 object detection model trained on custom. It takes image as input and annotates the different objects my question is How do I get coordinates of different objects? I want these coordinate data to further crop the images.\nIf the object detected is a person I want coordinates of that same for cat and dog.\nThanks for help in advance.\nMy code looks something like this\ninfer = YOLO(r'path_to_trained_model.pt')\nimage_path = r'image'\nresults = infer(image_path)\n\nclass_name = \"Person\" \nconfidence_threshold = 0.5 \n\nfor r in results:\n r.boxes.xyxy ## I was also getting this error 'list' object has no attribute 'xyxy'\n\n\nI know I am missing quite a bit of data here since I have no idea how it works."} +{"id": "000629", "text": "I want to train the YOLO v8 in transfer learning on my custom dataset.\nI have different classes than the base training on the COCO dataset.\nYet I don't want to learn again the feature extraction.\nHence I though following the Ultralytics YOLOv8 Docs - Train.\nYet, When I train on my small dataset I want to freeze the backbone.\nHow can I do that?\nI looked at the documentation and couldn't find how to do so."} +{"id": "000630", "text": "I am working on an Android app where I am already using OpenCV, I got a model which is in onnx format from YOLOv8 after conversion. Here is the output metadata of it.\n\nname - output0\ntype - float32[1,5,8400]\n\nSo far I am successfully running the model but in the end, the output that I got I can't comprehend.\nThis is the print statement from the output\nMat [ 1* 5* 8400*CV_32FC1, isCont=true, isSubmat=true, nativeObj=0x72345b4840, dataAddr=0x723076b000 ]\nclass Detector(private val context: Context) {\n private var net: Net? = null\n\n fun detect(frame: Bitmap) {\n // preprocess image\n val mat = Mat()\n Utils.bitmapToMat(resizedBitmap, mat)\n Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGBA2RGB)\n val inputBlob = Dnn.blobFromImage(mat, 1.0/255.0, Size(640.0, 640.0), Scalar(0.0), true, false)\n net?.setInput(inputBlob)\n val outputBlob = net?.forward() ?: return\n println(outputBlob)\n }\n\n fun setupDetector() {\n val modelFile = File(context.cacheDir, MODEL_NAME)\n if (!modelFile.exists()) {\n try {\n val inputStream = context.assets.open(MODEL_NAME)\n val size = inputStream.available()\n val buffer = ByteArray(size)\n inputStream.read(buffer)\n inputStream.close()\n val outputStream = FileOutputStream(modelFile)\n outputStream.write(buffer)\n outputStream.close()\n net = Dnn.readNetFromONNX(modelFile.absolutePath)\n } catch (e: Exception) {\n throw RuntimeException(e)\n }\n } else {\n net = Dnn.readNetFromONNX(modelFile.absolutePath)\n }\n }\n\n companion object {\n private const val MODEL_NAME = \"model.onnx\"\n private const val TENSOR_WIDTH = 640\n private const val TENSOR_HEIGHT = 640\n }\n}\n\nWhat could be the general approach to get bounding box, the confidence score and class labels? And if you have any solution for onnx model with OpenCV then you can provide as well. Also this question isn't android specific."} +{"id": "000631", "text": "I have found a solution but the problem is I have way too many points and I want to reduce them. 
Which is not a major issue, but still I would like to have fewer points.\nI have searched for other answers here but I have found nothing pertaining to my issue. I have attached a sample mask for reference.\nThe sample binarised mask\nJust to be clear, I have written a script in Python that does just that. The steps I had taken were:\n\nConvert the colorised mask to a binarized one (since I have only one class).\nApply contours on the image and find the edge points of the mask.\nRavel the detected contour points.\nBased on major shifts in angles, noted down only those points.\nSave all the points in YOLO segmentation text format (cls_name x1 y1 x2 y2 .... xn yn)\n\nEDIT: I did manage to do all these. But in the end I came to a horrible realization that, because it considers the outside borders, and if the labels are all overlapped, it becomes one big blob that contains unwanted region too. So I had to drop this and use those masks in UNET where I am getting the outputs I desire."} +{"id": "000632", "text": "What is the use of imgsz in inference on Yolov8 model ?\nLooking at current documentation's example, we can write :\nmodel.predict(source, save=True, imgsz=320, conf=0.5)\n(https://docs.ultralytics.com/modes/predict/#inference-sources).\nIt is not documented as argument, but can be passed.\nCan it differ from the value used in training ?"} +{"id": "000633", "text": "Get interested in yolov8 and after few youtube tutorials i tried to train custom dataset. After all manipulations i got no prediction results :( 2nd image - val_batch0_labels, 3rd image - val_batch0_pred\n\n\n\nI tried to do this in pycharm and google colab (same results) and here's the code:\n# main.py\nfrom ultralytics import YOLO\n\n\nmodel = YOLO(\"yolov8n.yaml\")\n\nresults = model.train(data=\"config.yaml\", epochs=1)\n\n# config.yaml\npath: Y:\\coding\\python\\yolo_test\\data_airplane\\data # dataset root dir\ntrain: images # train images (relative to 'path')\nval: images # val images (relative to 'path')\n\n# Classes\nnames:\n 0: airplane\n\nAnd for more information here's all what i get after run this:\n from n params module arguments \n\n0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]\n1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]\n2 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]\n3 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]\n4 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]\n5 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]\n6 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]\n7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]\n8 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]\n9 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]\n10 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']\n11 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1]\n12 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]\n13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']\n14 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1]\n15 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]\n16 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]\n17 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1]\n18 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]\n19 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]\n20 [-1, 9] 1 0 ultralytics.nn.modules.conv.Concat [1]\n21 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]\n22 [15, 18, 21] 1 897664 
ultralytics.nn.modules.head.Detect [80, [64, 128, 256]]\nYOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients\nUltralytics YOLOv8.0.141 Python-3.10.8 torch-2.0.1+cpu CPU (Intel Core(TM) i3-10105F 3.70GHz)\nengine\\trainer: task=detect, mode=train, model=yolov8n.yaml, data=config.yaml, epochs=1, patience=50, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=None, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=None, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, tracker=botsort.yaml, save_dir=runs\\detect\\train22\nOverriding model.yaml nc=80 with nc=1\n from n params module arguments \n\n0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]\n1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]\n2 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]\n3 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]\n4 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]\n5 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]\n6 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]\n7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]\n8 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]\n9 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]\n10 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']\n11 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1]\n12 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]\n13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']\n14 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1]\n15 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]\n16 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]\n17 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1]\n18 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]\n19 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]\n20 [-1, 9] 1 0 ultralytics.nn.modules.conv.Concat [1]\n21 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]\n22 [15, 18, 21] 1 751507 ultralytics.nn.modules.head.Detect [1, [64, 128, 256]]\nYOLOv8n summary: 225 layers, 3011043 parameters, 3011027 gradients\ntrain: Scanning Y:\\coding\\python\\yolo_test\\data_airplane\\data\\labels.cache... 
3 images, 0 backgrounds, 0 corrupt: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3/3 [00:00 threshold:\n cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)\n cv2.putText(frame, results.names[int(class_id)].upper(), (int(x1), int(y1 - 10)),\n cv2.FONT_HERSHEY_SIMPLEX, 1.3, (0, 255, 0), 3, cv2.LINE_AA)\n\n box_center_x = (x1 + x2) / 2\n box_center_y = (y1 + y2) / 2\n screen_width = win32api.GetSystemMetrics(0)\n screen_height = win32api.GetSystemMetrics(1)\n\n # moving crosshair to model box\n target_x = int(screen_width * box_center_x / W)\n target_y = int(screen_height * box_center_y / H)\n pydirectinput.moveTo(target_x, target_y)\n \n cv2.imshow(\"COMPUTER VISION\",frame)\n\n if cv2.waitKey(1) == ord('q'):\n cv2.destroyAllWindows()"} +{"id": "000637", "text": "I'm trying to hide the bounding boxes and the labels after a prediction in YOLOv8. I've set the necessary attributes but I still see the bounding boxes and labels in the final render.show(). What am I doing wrong?\n# load model\nmodel = YOLO('ultralyticsplus/yolov8m')\n\n# set model parameters\nconf = 0.2\niou = 0.5\nag_nms = False\n\n# try to turn off bounding boxes and labels\nmodel.overrides['hide_labels'] = True\nmodel.overrides['hide_conf'] = True\nmodel.overrides['show'] = False\n\n# set image\nimage = 'pic.jpg'\n\n# perform inference\nresults = model.predict(\n image,\n show=False,\n hide_labels=True,\n hide_conf=True,\n conf=conf,\n iou=iou,\n)\n\n# observe results\nrender = render_result(model=model, image=image, result=results[0])\nrender.show() # still sees the bounding boxes here"} +{"id": "000638", "text": "I'm using the Ultralytics YOLOv8 implementation to perform object detection on an image. However, when I try to retrieve the classification probabilities using the probs attribute from the results object, it returns None. Here's my code:\nfrom ultralytics import YOLO\n\n# Load a model\nmodel = YOLO('yolov8n.pt') # pretrained YOLOv8n model\n\n# Run batched inference on a list of images\nresults = model('00000.png') # return a list of Results objects\n\n# Process results list\nfor result in results:\n boxes = result.boxes # Boxes object for bbox outputs\n masks = result.masks # Masks object for segmentation masks outputs\n keypoints = result.keypoints # Keypoints object for pose outputs\n probs = result.probs # Probs object for classification outputs\n\nprint(probs)\n\nWhen I run the above code, the output for print(probs) is None. The remaining output is\nimage 1/1 00000.png: 640x640 1 person, 1 zebra, 7.8ms\nSpeed: 2.6ms preprocess, 7.8ms inference, 1.3ms postprocess per image at shape (1, 3, 640, 640)\n\nWhy is the probs attribute returning None, and how can I retrieve the classification probabilities for each detected object? Is there a specific design reason behind this behavior in the Ultralytics YOLOv8 implementation?"} +{"id": "000639", "text": "this is the code\nfrom ultralytics import YOLO\nlicense_plate_detector = YOLO('./model/best.pt')\nlicense_plates = license_plate_detector('./42.jpg')\n\nand this the output\n640x608 1 number-plate, 342.0ms\nSpeed: 12.4ms preprocess, 342.0ms inference, 3.0ms postprocess per image at shape (1, 3, 640, 608)\n\ni want to convert this output to image and save it to use with esayocr\nthe class don't have any save method so how to do this"} +{"id": "000640", "text": "I am trying to export .engine from onnx for the pretrained yolov8m model but get into trtexec issue. 
Note that I am targeting for a model supporting dynamic batch-size.\nI got the onnx by following the official instructions from ultralytics.\nfrom ultralytics import YOLO\n\n# Load a model\nmodel = YOLO('yolov8m.pt') # load an official model\nmodel = YOLO('path/to/best.pt') # load a custom trained\n\n# Export the model\nmodel.export(format='onnx',dynamic=True) # Note the dynamic arg\n\nI get the corresponding onnx. Now when I try to run trtexec\ntrtexec --onnx=yolov8m.onnx --workspace=8144 --fp16 --minShapes=input:1x3x640x640 --optShapes=input:2x3x640x640 --maxShapes=input:10x3x640x640 --saveEngine=my.engine\n\nI get\n[08/10/2023-23:53:10] [I] TensorRT version: 8.2.5\n[08/10/2023-23:53:11] [I] [TRT] [MemUsageChange] Init CUDA: CPU +336, GPU +0, now: CPU 348, GPU 4361 (MiB)\n[08/10/2023-23:53:11] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 348 MiB, GPU 4361 MiB\n[08/10/2023-23:53:12] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 483 MiB, GPU 4393 MiB\n[08/10/2023-23:53:12] [I] Start parsing network model\n[08/10/2023-23:53:12] [I] [TRT] ----------------------------------------------------------------\n[08/10/2023-23:53:12] [I] [TRT] Input filename: yolov8m.onnx\n[08/10/2023-23:53:12] [I] [TRT] ONNX IR version: 0.0.8\n[08/10/2023-23:53:12] [I] [TRT] Opset version: 17\n[08/10/2023-23:53:12] [I] [TRT] Producer name: pytorch\n[08/10/2023-23:53:12] [I] [TRT] Producer version: 2.0.1\n[08/10/2023-23:53:12] [I] [TRT] Domain: \n[08/10/2023-23:53:12] [I] [TRT] Model version: 0\n[08/10/2023-23:53:12] [I] [TRT] Doc string: \n[08/10/2023-23:53:12] [I] [TRT] ----------------------------------------------------------------\n[08/10/2023-23:53:12] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.\n[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:773: While parsing node number 305 [Range -> \"/model.22/Range_output_0\"]:\n[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---\n[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:775: input: \"/model.22/Constant_8_output_0\"\ninput: \"/model.22/Cast_output_0\"\ninput: \"/model.22/Constant_9_output_0\"\noutput: \"/model.22/Range_output_0\"\nname: \"/model.22/Range\"\nop_type: \"Range\"\n\n \n\n[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:776: --- End node ---\n[08/10/2023-23:53:12] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3353 In function importRange:\n[8] Assertion failed: inputs.at(0).isInt32() && \"For range operator with dynamic inputs, this version of TensorRT only supports INT32!\"\n[08/10/2023-23:53:12] [E] Failed to parse onnx file\n[08/10/2023-23:53:12] [I] Finish parsing network model\n[08/10/2023-23:53:12] [E] Parsing model failed\n[08/10/2023-23:53:12] [E] Failed to create engine from model.\n\nI am aware that some people suggest upgrading to latest TRT version but I am looking for an alternate solution."} +{"id": "000641", "text": "I have a YOLOv7 model trained on my custom dataset. I exported the model to TensorFlow lite successfully and was able to use it for inference in Python. But when I try to use the same model in Android, using the object detection project with TensorFlow lite, it throws this error:\njava.lang.IllegalArgumentException: Error occurred when initializing ObjectDetector: The input tensor should have dimensions 1 x height x width x 3. 
Got 1 x 3 x 640 x 640.\nIs it possible to change the input shape for the ObjectDetector class, or export the YOLOv7 or YOLOv5 model with corresponding input shape?\nI tried to tweak the export process to change the input shape of ONNX model which is the intermediate model in exporting from PyTorch to TensorFlow Lite but it throws this error:\nONNX export failure: Given groups=1, weight of size [32, 3, 3, 3], expected input[1, 640, 640, 3] to have 3 channels, but got 640 channels instead\nupdate: I used onnx2tf to export .tflite model with NHWC input shape. Now the Android project throws this error:\njava.lang.RuntimeException: Error occurred when initializing ObjectDetector: Input tensor has type kTfLiteFloat32: it requires specifying NormalizationOptions metadata to preprocess input images.\nI couldn't find a way to add normalization options to metadata using this doc. Any solutions?"} +{"id": "000642", "text": "I have a trained model and I have detected my required object using following code\nimport cv2\nfrom PIL import Image\nfrom ultralytics import YOLO\n\nimage = cv2.imread(\"screenshot.png\")\nmodel = YOLO('runs/detect/train4/weights/best.pt')\nresults = model.predict(image, show=True, stream=True, classes=0, imgsz=512)\nfor result in results:\n for box in result.boxes:\n class_id = result.names[box.cls[0].item()]\n if (class_id == \"myclassname\"): \n cords = box.xyxy[0].tolist()\n cords = [round(x) for x in cords]\n conf = round(box.conf[0].item(), 2)\n print(\"Object type:\", class_id)\n print(\"Coordinates:\", cords)\n print(\"Probability:\", conf)\n print(\"---\")\n\nFrom this detected portion of image I need to detect an other class how I can do that?\nI have searched enough but I could not see any post for this."} +{"id": "000643", "text": "I trained a model to detect traffic lights and classify thier color, [red,green,yellow,off].\nI want to only display the lights with a confidence above 50%, but I cant figure out how to do that with yolo v8. Ive tried using .conf and .prob as the documentation states but its all empty. Here is my current script. It streams webcam view and looks for stoplights.\nAny help to get confidence values or even just the classification values from this would be amazing. I have uploaded the model to github here for people that want to test.\nI reccomend using the best_traffic_nano_yolo.pt model as its the most lightweight.\nGoogling traffic light at intersection and holding an your phone up to the webcam should be enough for the model to detect and classify the light.\nimport cv2\nfrom PIL import Image\nfrom ultralytics import YOLO\n\n# Load a pretrained YOLOv8n model\nmodel = YOLO('/Models/best_traffic_nano_yolo.pt')\n\n# open a video file or start a video stream\ncap = cv2.VideoCapture(0) # replace with 0 for webcam\n\nwhile cap.isOpened():\n # Capture frame-by-frame\n ret, frame = cap.read()\n if not ret:\n break\n\n # flip the image\n # frame = cv2.flip(frame, -1) \n\n # Run inference on the current frame\n results = model(frame) # results list\n\n for r in results:\n frame = r.plot()\n\n # Display the resulting frame\n cv2.imshow('frame', frame)\n \n # Press 'q' on keyboard to exit\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n\n# After the loop release the cap object and destroy all windows\ncap.release()\ncv2.destroyAllWindows()"} +{"id": "000644", "text": "I am currently working with Ultralytics Yolov8 model for object detection.\nI trained my model with a custom dataset. 
I annotated my dataset on Roboflow and exported in Yolov8 format.\nThe training result looks good. However there is a mismatch between the coordinates of bounding boxes of my manual annotation on Roboflow and that of prediction.\nFor example I have this image called image_1.jpg which is annotated on Roboflow and is part of training set. Roboflow defined the coordinates (as the part of the label):\n\n[0 0.36953125 0.39609375 0.54765625 0.7875]\n\nwhere the last 4 numbers showing the x1,x2,y1,y2 coordinates of the bounding boxes. However when I passed this image into predict method (just for checking) using the following piece of code\nmodel = YOLO(\u2018best_weights.pt\u2019)\nresult = model.predict(path_of_example_image, save = True)\nboxes = result[0].boxes.xyxy\n\nI got the coordinates:\n\ntensor([[ 22.5558, 9.7429, 619.7601, 345.7614]])\n\nWhen I draw the bounding box with the predicted coordinates, the area covers the object completely (with .99% accuracy on the (correctly) predicted class). On the other hand, when I draw the bounding box with the coordinates from manual annotation, the object is not covered at all as, obviously, all coordinates are below 1. So I just wonder what kind of preprocessing is applied within the model so that the coordinates from Roboflow's labelling make sense and my training result is actually good?"} +{"id": "000645", "text": "So, I'm trying to train a YOLO classification model on a custom dataset that contains jpg-images of two different classes. But when I launch training, I get following error:\nTraceback (most recent call last):\n File \"C:\\pythonProject\\main.py\", line 5, in \n model.train(data='./training_dataset', epochs=1, imgsz=64)\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\ultralytics\\engine\\model.py\", line 341, in train\n self.trainer.train()\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\ultralytics\\engine\\trainer.py\", line 192, in train\n self._do_train(world_size)\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\ultralytics\\engine\\trainer.py\", line 288, in _do_train\n self._setup_train(world_size)\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\ultralytics\\engine\\trainer.py\", line 255, in _setup_train\n self.test_loader = self.get_dataloader(self.testset, batch_size=batch_size * 2, rank=-1, mode='val')\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\ultralytics\\models\\yolo\\classify\\train.py\", line 88, in get_dataloader\n dataset = self.build_dataset(dataset_path, mode)\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\ultralytics\\models\\yolo\\classify\\train.py\", line 83, in build_dataset\n return ClassificationDataset(root=img_path, args=self.args, augment=mode == 'train', prefix=mode)\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\ultralytics\\data\\dataset.py\", line 220, in __init__\n super().__init__(root=root)\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\torchvision\\datasets\\folder.py\", line 309, in __init__\n super().__init__(\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\torchvision\\datasets\\folder.py\", line 145, in __init__\n samples = self.make_dataset(self.root, class_to_idx, extensions, is_valid_file)\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\torchvision\\datasets\\folder.py\", line 189, in make_dataset\n return make_dataset(directory, class_to_idx, extensions=extensions, is_valid_file=is_valid_file)\n File \"C:\\pythonProject\\venv\\lib\\site-packages\\torchvision\\datasets\\folder.py\", line 61, in make_dataset\n directory = 
os.path.expanduser(directory)\n File \"C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python310\\lib\\ntpath.py\", line 293, in expanduser\n path = os.fspath(path)\nTypeError: expected str, bytes or os.PathLike object, not NoneType\n\n\nI've tried two different datasets and that problem occured on both of them.\nMy full code:\nfrom ultralytics import YOLO\n\nmodel = YOLO('yolov8n-cls.pt')\n\nmodel.train(data='./training_dataset', epochs=1, imgsz=64)\n\nDataset structure:\ntraining_dataset/train/cats;\ntraining_dataset/train/dogs"} +{"id": "000646", "text": "I have trained my YoloV8 detection model.\nAfter training the model there are plenty of files stored within in the train folder.\nOne of the files are the train_batch.jpg and the val_batch.jpg.\nThe images within the train_batch.jpg are cutted of and scrambled.\n\nThe images within the val_batch.jpg looks fine.\n\nThe imagesize of the images within train_batch and val_batch are the same.\nDoes someone know if this is a problem?\nI have tried to change the image sizes and used various batch sizes."} +{"id": "000647", "text": "What are the class IDs and their corresponding class names for YOLOv8 models? I understand there are approximately 80 classes in the object detection model of YOLOv8. However, I'm looking to specifically identify each class along with their respective class IDs. Additionally, do all YOLO models (yolov3, yolov5, yolov7, yolov8) have the same number of classes and corresponding class IDs?"} +{"id": "000648", "text": "I need some help as I will be needing this to work for my final thesis project for the Model has 2 classes inheat and non-inheat, I did the codes here below but when it tries to predict or detect by frames of the video it detects the non-inheat but it only stacks in the ininheat frames instead of the non_ inheat frames I might be doing a wrong process here for by frames in stacking them up even its detecting non-inheat it stacks to the in_heat frames.\ncodes:\ndef process_video_with_second_model(video_path):\n cap = cv2.VideoCapture(video_path)\n class_counts = {'inheat': 0, 'non-inheat': 0}\n\n in_heat_frames = []\n non_in_heat_frames = []\n\n while True:\n ret, frame = cap.read()\n if frame is None:\n break # Break the loop when no more frames are available\n\n # Resize the frame to a smaller size (e.g., 400x400)\n frame_small = cv2.resize(frame, (400, 400))\n\n # Use the second model to detect in-heat behavior\n results_in_heat = yolov8_model_in_heat.predict(source=frame_small, show=True, conf=0.8)\n\n # Print results to inspect structure\n for results_in_heat_instance in results_in_heat:\n # Access bounding box coordinates\n boxes = results_in_heat_instance.boxes\n\n # CONFIDENCE 0.5\n if len(boxes) > 0:\n class_name = results_in_heat_instance.names[0]\n\n # Use a dictionary to store the counts for each class\n class_counts[class_name] += 1\n\n # Add the frame to the corresponding list based on the class name\n if class_name == 'non-inheat':\n non_in_heat_frames.append(frame)\n elif class_name == 'inheat':\n in_heat_frames.append(frame)\n\n print(f\"Class Counts: {class_counts}\")\n\n # Check if either condition is met (50 frames for inheat and 50 frames for non-inheat)\n if class_counts['inheat'] >= 50 and class_counts['non-inheat'] >= 50:\n break\n\n # Release resources for the second model\n cap.release()\n cv2.destroyAllWindows()\n\n # Stack the in-heat and non-in-heat frames vertically\n stacked_in_heat_frames = np.vstack(in_heat_frames)\n stacked_non_in_heat_frames = 
np.vstack(non_in_heat_frames)\n\n # Display the stacked in-heat and non-in-heat frames\n cv2.imshow('Stacked In-Heat Frames', stacked_in_heat_frames)\n cv2.imshow('Stacked Non-In-Heat Frames', stacked_non_in_heat_frames)\n cv2.waitKey(0)\n cv2.destroyAllWindows()\n\n # Compare the counts and return the label with the higher count\n if class_counts['inheat'] > class_counts['non-inheat']:\n return 'inheat'\n elif class_counts['non-inheat'] > class_counts['inheat']:\n return 'non-inheat'\n\nI did read the YOLOv8 Documents for predicting but still cant do it. I did this in the Pycharm IDE"} +{"id": "000649", "text": "I am using Ultralytics YOLO for license plate detection, and I'm encountering an issue when trying to extract bounding box coordinates from the Results.boxes object. I have inspected the structure of the Results.boxes object, but I am having difficulty accessing the bounding box information correctly.\nclass ImageProcessing:\n def __init__(self, model_path: Path, input_image: Path, output_image: Path):\n if not isinstance(model_path, Path):\n raise TypeError(\"model_path must be a pathlib.Path instance\")\n if not isinstance(input_image, Path) or not isinstance(output_image, Path):\n raise TypeError(\"input_image and output_image must be pathlib.Path instances\")\n # Load the YOLO model from the provided path\n self.model = YOLO(str(model_path))\n self.input_image = input_image\n self.output_image = output_image\n\n def ascertain_license_plates_as_image(self, threshold: float = 0.5, fontscale: float = 1.3, color: tuple = (0, 255, 0), thickness: int = 3):\n image = opencv.imread(str(self.input_image))\n results = self.model(image)\n\n # Check if results is a list and get the first result\n if isinstance(results, list):\n results = results[0]\n\n # Iterate through each detected object\n for box in results.boxes:\n # Extract coordinates, confidence, and class ID\n x1, y1, x2, y2, conf, class_id = box.data[0][0], box.data[0][1], box.data[0][2], box.data[0][3], box.conf.item(), int(box.cls.item())\n if conf > threshold:\n opencv.rectangle(image, (int(x1), int(y1)), (int(x2), int(y2)), color, thickness)\n label = results.names[class_id].upper() if results.names else f'class {class_id}'\n opencv.putText(image, label, (int(x1), int(y1) - 10), opencv.FONT_HERSHEY_SIMPLEX, fontscale, color, thickness, opencv.LINE_AA)\n\n opencv.imwrite(str(self.output_image), image)\n return results\n\nHowever, I'm getting an IndexError, and it seems that my indexing might be incorrect for this particular Boxes object. Or even worse, cv2 is not highlighting the license plate."} +{"id": "000650", "text": "I\u2019ve custom trained a yolov8n model and I would like to run it on a Jetson Nano. Has anyone managed to do this? If yes, would you be so kind to help me out?\nI\u2019ve got a .pt of the custom trained model. The OS image offered by NVidia on their website is an Ubuntu 18.04 and I have run into many compatibility issues. I\u2019m interested in finding out if anyone has managed to get yolo running on the Jetson specifically the yolov8n model from ultralytics.\nThanks in advance!"} +{"id": "000651", "text": "I'm currently working on an object detection project using YOLOv8 and a customized dataset. 
In my dataset, each image is accompanied by its corresponding label file (Data_1.png -> Data_1.txt).\nThe label file Data_1.txt follows the format:\nClass_type, x_1 min, y_1 min, ..., x_4 min, y_4 min.\n\nI'm interested in applying a perspective augmentation to my dataset, and I've decided to use the YOLOv8 augmentation functionality. However, I am unsure whether YOLOv8 generates labels for the augmented data or not. If it does not, I would greatly appreciate any suggestions or alternative approaches to handle the generation of labels for the augmented data."} +{"id": "000652", "text": "How do I get the class names of segmented instances when detecting multiple classes in YOLOv8? The detections do not have a .cls attribute like here YOLOv8 get predicted class name. Also the docs do not seem to mention anything e.g. here\nWhen I use the show=true argument in the prediction function, the classes are distinguished in the resulting image, but I cannot get them programmatically. My code that gets me all detections I wanjt but does not let me know which one is which:\nfrom ultralytics import YOLO\nmodel = YOLO(\"path/to/best.pt\")\nresult = model.predict(os.path.join(cut_dir, im_name), save_conf=True, show=True)\nif result[0].masks is not None:\n for counter, detection in enumerate(result[0].masks.data):\n detected = np.asarray(detection.cpu())"} +{"id": "000653", "text": "I want to close the detection window on a key press and not have to stop the entire code. I am able to do it for my image detection but since video runs on a loop I can't find how.\n root.video = filedialog.askopenfilename(initialdir=\"/desktop\", title=\"Select video to detect\")\n cap = cv2.VideoCapture(root.video)\n\n model = YOLO('best.pt')\n\n classNames = ['Orange', 'Orange', 'Pomegranate', 'Pomegranate', 'apple', 'apple', 'banana', 'banana',\n 'fruits', 'guava', 'guava', 'guava', 'lime', 'lime']\n while True:\n success, vid = cap.read()\n results = model(img, stream=True)\n\n cv2.imshow(\"DetectVideo\", vid)\n cv2.waitKey(1)\n\nI have tried the following\nk = cv2.waitKey(0) #also tried with (1)\nprint(k)\nif k == 27: # close on ESC key\n cv2.destroyAllWindows()\n\nwhich basically makes it so it just closes each frame of the video then continues to next frame on key press."} +{"id": "000654", "text": "def extract_and_process_tracks(self, tracks):\n boxes = tracks[0].boxes.xyxy.cpu()\n clss = tracks[0].boxes.cls.cpu().tolist()\n track_ids = tracks[0].boxes.id.int().cpu().tolist()\n\n self.annotator = Annotator(self.im0, self.tf, self.names)\n self.annotator.draw_region(reg_pts=self.reg_pts, color=(0, 255, 0))\n\n for box, track_id, cls in zip(boxes, track_ids, clss):\n self.annotator.box_label(box, label=self.names[cls], color=colors(int(cls), True)) \n\n # Draw Tracks\n track_line = self.track_history[track_id]\n track_line.append((float((box[0] + box[2]) / 2), float((box[0] + box[2]) / 2))\n track_line.pop(0) if len(track_line) > 30 else None\n\n if self.draw_tracks:\n self.annotator.draw_centroid_and_tracks(track_line,\n color=(0, 255, 0),\n track_thickness=self.track_thickness)\n\nThe object_counter.py provided by ultralytics can achieve the counting job, the track_line.append store the center of the\nbox(float((box[0] + box[2]) / 2), float((box[0] + box[2]) / 2),\nbut how to change the center into the keypoints coordinate, e.g. 
I want to count the head keypoints of animal cross a specified line.\nHow to get the keypoints coordinate in yolo_pose?"} +{"id": "000655", "text": "I have my own pre-trained YOLO model, and I want to have different colors of LEDs light up on Arduino Uno breadbroad when different labels are being detected from my webcam.\nSo, I was thinking of first assigning the command for different LEDs to light up. (\"G\" for green led, \"R\" for red led, \"Y\" for yellow led, \"A\" for turning both red and yellow led at the same time, and \"0\" to turn off all led.\n)\nI started with my Arduino code as below:\nchar command;\n\nvoid setup() {\n Serial.begin(9600);\n pinMode(2, OUTPUT); // Green LED pin\n pinMode(3, OUTPUT); // Red LED pin\n pinMode(4, OUTPUT); // Yellow LED pin\n}\n\nvoid loop() {\n if (Serial.available() > 0) {\n command = Serial.read();\n if (command == 'G') {\n // Turn on green LED\n digitalWrite(2, HIGH);\n digitalWrite(3, LOW); // Turn off red LED\n digitalWrite(4, LOW); // Turn off yellow LED\n } else if (command == 'R') {\n // Turn on red LED\n digitalWrite(2, LOW); // Turn off green LED\n digitalWrite(3, HIGH);\n digitalWrite(4, LOW); // Turn off yellow LED\n } else if (command == 'Y') {\n // Turn on yellow LED\n digitalWrite(2, LOW); // Turn off green LED\n digitalWrite(3, LOW); // Turn off red LED\n digitalWrite(4, HIGH);\n } else if (command == 'A') {\n // Turn on both red and yellow LEDs\n digitalWrite(2, LOW); // Turn off green LED\n digitalWrite(3, HIGH);\n digitalWrite(4, HIGH);\n } else if (command == '0') {\n // Turn off all LEDs\n digitalWrite(2, LOW);\n digitalWrite(3, LOW);\n digitalWrite(4, LOW);\n }\n }\n}\n\n\nIn my YOLO model, there are three classes\n#Classes\nnames:\n 0: aaa\n 1: bbb\n 2: ccc\n\nTherefore, for my python code I started with first turning on the webcam and loading then using serial library to connect to arduino.\nfrom ultralytics import YOLO\nimport serial\nimport time\nimport sys\n\n# Load my YOLO model using\nmodel = YOLO(\"best.pt\")\n\n# Open the serial port for communication with Arduino\narduino = serial.Serial('COM3', 9600) # Change 'COM3' to your Arduino port\n\nafter that, I want to assign the labels with command to Arduino, and I also try to do a little\n# Map the class labels to corresponding Arduino commands\nlabel_commands = {\n 'aaa': 'G',\n 'bbb': 'R',\n 'ccc': 'Y',\n 'aaa_bbb': 'A',\n 'aaa_bbb_ccc': 'A',\n 'none': '0'\n}\n\nwhile True:\n try:\n results = model.predict(source=\"0\", show=True) # assumes '0' is your source identifier\n\n # Extract the detected labels from results\n detected_labels = [item['label'] for item in results.xyxy[0].numpy()]\n\n # Determine the Arduino command based on detected labels\n if 'aaa' in detected_labels and 'bbb' in detected_labels and 'ccc' in detected_labels:\n command = label_commands['aaa_bbb_ccc']\n elif 'aaa' in detected_labels and 'bbb' in detected_labels:\n command = label_commands['aaa_bbb']\n elif 'aaa' in detected_labels:\n command = label_commands['aaa']\n elif 'bbb' in detected_labels and 'ccc' in detected_labels:\n command = label_commands['aaa_bbb_ccc']\n elif 'bbb' in detected_labels:\n command = label_commands['bbb']\n elif 'ccc' in detected_labels:\n command = label_commands['ccc']\n else:\n command = label_commands['none']\n\n # Print the command for debugging\n print(f\"Sending command to Arduino: {command}\")\n sys.stdout.flush() # Flush the standard output\n\n # Send the command to Arduino\n arduino.write(command.encode())\n\n except Exception as e:\n print(f\"Error: 
{e}\")\n sys.stdout.flush() # Flush the standard output\n\n # Wait for 5 seconds before sending the next command\n time.sleep(5)\n\nBUT, this part of code doesn't work at all. I know my arduino is connected for sure, and I have also tested with from randoming sending the letter command to arduino from python, which worked perfectly.\nAlso when I run this code, the run window are showing only things like:\n1/1: 0... Success \u2705 (inf frames of shape 640x480 at 25.00 FPS)\n\n\nWARNING \u26a0\ufe0f inference results will accumulate in RAM unless `stream=True` is passed, causing potential out-of-memory\nerrors for large sources or long-running streams and videos. See https://docs.ultralytics.com/modes/predict/ for help.\n\nExample:\n results = model(source=..., stream=True) # generator of Results objects\n for r in results:\n boxes = r.boxes # Boxes object for bbox outputs\n masks = r.masks # Masks object for segment masks outputs\n probs = r.probs # Class probabilities for classification outputs\n\n0: 480x640 (no detections), 141.0ms\n0: 480x640 1 aaa, 182.2ms\n0: 480x640 1 aaa, 134.3ms\n0: 480x640 1 aaa, 122.1ms\n0: 480x640 (no detections), 121.4ms\n0: 480x640 (no detections), 147.5ms\n0: 480x640 (no detections), 131.5ms\n0: 480x640 (no detections), 140.3ms\n0: 480x640 1 aaa, 127.1ms\n0: 480x640 1 aaa, 124.1ms\n0: 480x640 1 aaa, 123.2ms\n0: 480x640 1 aaa, 172.2ms\n...\n\neven with my debugging script, it doesn't tell me what command is being sent to the arduino broad\n # Print the command for debugging\n print(f\"Sending command to Arduino: {command}\")\n sys.stdout.flush() # Flush the standard output"} +{"id": "000656", "text": "can you please help me.........\nI have my custom trained model (best.pt), it detects two things person and headlight. Now I want the output according to these conditions: 1. If model detect only headlight return 0, 2. If model detect only person return 1, 3. 
If model detect headlight and person both return 0.\nimport cv2\nfrom ultralytics import YOLO\n\nvideo_path = 'data/video1.mp4'\nvideo_out_path = 'out.mp4'\n\ncap = cv2.VideoCapture(video_path)\n\n# Check if the video file is opened successfully\nif not cap.isOpened():\n print(\"Error: Could not open the video file.\")\n exit()\n\nret, frame = cap.read()\n\n# Check if the first frame is read successfully\nif not ret:\n print(\"Error: Could not read the first frame from the video.\")\n exit()\n\ncap_out = cv2.VideoWriter(video_out_path, cv2.VideoWriter_fourcc(*'MP4V'), cap.get(cv2.CAP_PROP_FPS),\n (int(cap.get(3)), int(cap.get(4)))) # Use cap.get(3) and cap.get(4) for width and height\n\nmodel = YOLO(\"bestall5.pt\")\n\ndetection_threshold = 0.5\nwhile ret:\n results_list = model(frame)\n\n headlight_detected = False\n person_detected = False\n\n # Iterate through the list of results\n for results in results_list:\n # Check if the current result has the necessary attributes\n if hasattr(results, 'xyxy'):\n for result in results.xyxy:\n x1, y1, x2, y2, score, class_id = result.tolist()\n x1, x2, y1, y2 = int(x1), int(x2), int(y1), int(y2)\n\n # Assuming class_id is the index of the class in the model's class list\n class_name = model.names[class_id]\n\n if class_name == \"headlight\" and score > detection_threshold:\n headlight_detected = True\n elif class_name == \"person\" and score > detection_threshold:\n person_detected = True\n\n # Output based on the specified conditions\n if headlight_detected and person_detected:\n output = 0\n elif headlight_detected:\n output = 0\n elif person_detected:\n output = 1\n else:\n output = -1 # No person or headlight detected\n\n print(\"Output:\", output)\n\n cap_out.write(frame)\n\n cv2.imshow('Object Detection', frame)\n \n # Break the loop if 'q' key is pressed\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n\n ret, frame = cap.read()\n\ncap.release()\ncap_out.release()\ncv2.destroyAllWindows()\n\nI tried this but getting only -1 as output but my video has both headlight and person"} +{"id": "000657", "text": "I am currently working with YOLOv8 and I'm wondering if there is a method similar to results.pandas().xyxy available in YOLOv5 to obtain structured results in tabular form.\nWith YOLOv5, it's possible to get results easily using the following code:\nresults = model(img)\ndf = results.pandas().xyxy[0] # Get results in tabular format\nprint(df)\n# xmin ymin xmax ymax confidence class name\n# 0 749.50 43.50 1148.0 704.5 0.874023 0 person\n# 1 433.50 433.50 517.5 714.5 0.687988 27 tie\n# 2 114.75 195.75 1095.0 708.0 0.624512 0 person\n# 3 986.00 304.00 1028.0 420.0 0.286865 27 tie"} +{"id": "000658", "text": "When my nn starts learning, the kernel crashes\nThe Kernel crashed while executing code in the current cell or a previous cell. \n\nPlease review the code in the cell(s) to identify a possible cause of the failure. \n\nClick here for more info. \n\nView Jupyter log for further details.\n\nYolo doesn't use my GPU resources for learning instead it trying to use CPU and I think that it is a reason why kernel crashes.\nHow I can fix it?\nscreenshot\nI tried to install CUDA 7.5 (for my RTX 2060). I recently started to study neural networks and I am not sure that I need CUDA.\nI also checked similar topics here but I am not sure that there is my case."} +{"id": "000659", "text": "Let's say I have a folder called 'test' with folders inside, 'images' and 'labels'. I also have a YOLOv8 model which I've trained called 'best.pt. 
My labels are polygons (yolo-obb .txt files).\nI want to find the mean average precision (MAP) of my YOLOv8 model on this test set.\nI've read both the documentation for predicting and benchmarking, however, I'm struggling to find an example of calculating map from some test images.\nhttps://docs.ultralytics.com/modes/predict/\nhttps://docs.ultralytics.com/modes/benchmark/\nfrom ultralytics import YOLO\n\n# Load a pretrained YOLOv8n model\nmodel = YOLO('best.pt')\n\n# Run inference on an image\nresults = model(['test/images/bus.jpg', 'test/images/zidane.jpg']) # list of 2 Results objects\n\nI imagine I have to put the list of images in the above, then write code to calculate map for everything in the test folder and average it. Are there packages that have already done this?\nWhat's the code to achieve this task?"} +{"id": "000660", "text": "I have the following code\nfrom ultralytics import YOLO\nimport cv2\nimport math\nimport os \nimport time\n\n# Start webcam\ncap = cv2.VideoCapture(0)\ncap.set(3, 640)\ncap.set(4, 480)\n\n# Load custom model\nmodel_path = os.path.join('.', 'runs', 'detect', 'train', 'weights', 'last.pt')\nmodel = YOLO(model_path) # load a custom model\n\n# Define your custom object classes\nclassNames = [\"reese_pretzel\"] # Update with your custom classes\n\n# Confidence threshold\nconfidence_threshold = 0.5\n\n# Initialize variables for tracking time\nstart_time = None\nend_time = None\nobject_detected = False\n\nwhile True:\n success, img = cap.read()\n results = model(img, stream=True)\n\n # Process results\n for r in results:\n boxes = r.boxes\n\n for box in boxes:\n # Extract box coordinates\n x1, y1, x2, y2 = box.xyxy[0]\n x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)\n\n # Confidence\n confidence = math.ceil((box.conf[0]*100))/100\n\n # Check confidence threshold\n if confidence > confidence_threshold:\n # Class name\n cls = int(box.cls[0])\n class_name = classNames[cls]\n\n # Draw bounding box\n cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 255), 3)\n\n # Object details\n org = [x1, y1]\n font = cv2.FONT_HERSHEY_SIMPLEX\n fontScale = 1\n color = (255, 0, 0)\n thickness = 2\n\n cv2.putText(img, class_name, org, font, fontScale, color, thickness)\n \n # Set start time when an object is first detected\n if not object_detected:\n start_time = time.time()\n object_detected = True\n else:\n # Reset start time when no object is detected\n start_time = None\n object_detected = False\n\n cv2.imshow('Webcam', img)\n if cv2.waitKey(1) == ord('q'):\n break\n\ncap.release()\ncv2.destroyAllWindows()\n\n\nI keep getting outputs to my console like\n0: 480x640 (no detections), 49.2ms\nSpeed: 0.9ms preprocess, 49.2ms inference, 0.2ms postprocess per image at shape (1, 3, 480, 640)\n\nor\n0: 384x640 1 reese_pretzel, 75.0ms\nSpeed: 4.8ms preprocess, 75.0ms inference, 0.3ms postprocess per image at shape (1, 3, 384, 640)\n\nWhere are these messages getting printed from?\nI tried looking at the YOLO model.py file but that too doesn't have any print statements in it. I want to do something like if there are no detections, print('no detect), and if something is detected, print the name of the object."} +{"id": "000661", "text": "When training a Yolo model like below:\nfrom ultralytics import YOLO\n\n# Load a model\nmodel = YOLO('yolov8s.pt') \n\nresults = model.train(data='coco128.yaml', \nepochs=100, imgsz=640, save_period=1)\n\nThe save_period option will save every epoch. 
When the best epoch is found the file is saved as best.pt.\nThe file size of best.pt is ~27MB and each epoch is ~120MB. Is it possible to use the compression applied to the best epoch at the end of training to every epoch (even if this is done after training)."} +{"id": "000662", "text": "I'm encountering a problem with creating space after uploading app.py and requirements.txt. For loading the model, I'm using this code: from ultralyticsplus import YOLO, render_result.\n(I use Gradio to create a website.)\nThe main task for uploading app.py to the space is that I want the HTML code to be embedded on the Google site.\nmodel_path = ('(my model path on huggingface')\nmodel = YOLO(model_path)\n\nIf I use another method to load the model instead of using YOLO, is it possible to fix this error?\nThe error said\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.10/site-packages/ultralyticsplus/ultralytics_utils.py\", line 59, in __init__\n self._load_from_hf_hub(model, hf_token=hf_token)\n File \"/usr/local/lib/python3.10/site-packages/ultralyticsplus/ultralytics_utils.py\", line 91, in _load_from_hf_hub\n ) = self._assign_ops_from_task()\n File \"/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1614, in __getattr__\n raise AttributeError(\"'{}' object has no attribute '{}'\".format(\nAttributeError: 'YOLO' object has no attribute '_assign_ops_from_task'\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/home/user/app/app.py\", line 95, in \n gr.Interface(fn=detect_objects,\n File \"/usr/local/lib/python3.10/site-packages/gradio/interface.py\", line 518, in __init__\n self.render_examples()\n File \"/usr/local/lib/python3.10/site-packages/gradio/interface.py\", line 851, in render_examples\n self.examples_handler = Examples(\n File \"/usr/local/lib/python3.10/site-packages/gradio/helpers.py\", line 71, in create_examples\n examples_obj.create()\n File \"/usr/local/lib/python3.10/site-packages/gradio/helpers.py\", line 298, in create\n client_utils.synchronize_async(self.cache)\n File \"/usr/local/lib/python3.10/site-packages/gradio_client/utils.py\", line 889, in synchronize_async\n return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs) # type: ignore\n File \"/usr/local/lib/python3.10/site-packages/fsspec/asyn.py\", line 103, in sync\n raise return_result\n File \"/usr/local/lib/python3.10/site-packages/fsspec/asyn.py\", line 56, in _runner\n result[0] = await coro\n File \"/usr/local/lib/python3.10/site-packages/gradio/helpers.py\", line 360, in cache\n prediction = await Context.root_block.process_api(\n File \"/usr/local/lib/python3.10/site-packages/gradio/blocks.py\", line 1695, in process_api\n result = await self.call_function(\n File \"/usr/local/lib/python3.10/site-packages/gradio/blocks.py\", line 1235, in call_function\n prediction = await anyio.to_thread.run_sync(\n File \"/usr/local/lib/python3.10/site-packages/anyio/to_thread.py\", line 56, in run_sync\n return await get_async_backend().run_sync_in_worker_thread(\n File \"/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py\", line 2144, in run_sync_in_worker_thread\n return await future\n File \"/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py\", line 851, in run\n result = context.run(func, *args)\n File \"/usr/local/lib/python3.10/site-packages/gradio/utils.py\", line 692, in wrapper\n response = f(*args, **kwargs)\n File \"/home/user/app/app.py\", line 24, in 
detect_objects\n model = YOLO(model_path)\n File \"/usr/local/lib/python3.10/site-packages/ultralyticsplus/ultralytics_utils.py\", line 65, in __init__\n raise NotImplementedError(\nNotImplementedError: Unable to load model='MvitHYF/v8mvitcocoaseed2024'. As an example try model='yolov8n.pt' or model='yolov8n.yaml'\n\nI also ran app.py on VSCode, and everything ran perfectly (run on localhost). However, I encountered this error when trying to create the space. I tried adding yolov8n.pt to both the model and the space site, but nothing changed. At first, I thought it might fix the error.\nThank you for you help"} +{"id": "000663", "text": "I downloaded one of the pretrained yolo models from the link:\nhttps://github.com/WongKinYiu/yolov7/releases\nIn this case, yolov7-tiny.pt is downloaded.\nThen tried to run the code to load the model and convert it to onnx file:\nimport torch\nimport onnx\n\nmodel = torch.load('./yolo_custom/yolov7-tiny.pt')\ninput_shape = (1, 3, 640, 640)\ntorch.onnx.export(model, torch.randn(input_shape), 'yolov7-tiny.onnx', opset_version=11)\n\nAn error occurs on\nmodel = torch.load('./yolo_custom/yolov7-tiny.pt')\n\nand the error message is:\nModuleNotFoundError: No module named 'models'\n\nThe issue is reproducible even on Colab. Is there anything wrong on the steps?"} +{"id": "000664", "text": "# Veri k\u00fcmenizin yolunu ayarlay\u0131n\ndata_dir = '/content/datasets/u-granada-g-detect-2'\n\n# S\u0131n\u0131f etiketlerini y\u00fckleyin\nclass_names = ['knife', 'gun']\n\n# E\u011fitim ve do\u011frulama veri k\u00fcmelerini olu\u015fturun\ntrain_images = []\ntrain_targets = []\nval_images = []\nval_targets = []\n\nfor phase in ['train', 'valid']:\n image_dir = os.path.join(data_dir, phase, 'images')\n label_dir = os.path.join(data_dir, phase, 'labels')\n\n for image_path in tqdm(os.listdir(image_dir)):\n image = cv2.imread(os.path.join(image_dir, image_path))\n image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n\n label_path = os.path.join(label_dir, image_path.replace('.jpg', '.txt'))\n with open(label_path, 'r') as f:\n labels = []\n for line in f:\n bbox = [float(x) for x in line.split()]\n labels.append([int(bbox[0]), int(bbox[1]), int(bbox[2]), int(bbox[3]), 1 if bbox[4] == 0 else 0])\n\n train_images.append(image)\n train_targets.append(labels)\n\n if phase == 'val':\n val_images.append(image)\n val_targets.append(labels)\n\n# Yeniden boyutland\u0131rma i\u015flemi\nresized_train_images = [cv2.resize(img, (224, 224)) for img in train_images]\nresized_val_images = [cv2.resize(img, (224, 224)) for img in val_images]\n\n# NumPy dizisine d\u00f6n\u00fc\u015ft\u00fcrme\ntrain_images = np.array(resized_train_images)\nval_images = np.array(resized_val_images)\n\n# Normalle\u015ftirme i\u015flemi\ntrain_images = train_images / 255.0\nval_images = val_images / 255.0\n\n# EfficientNet B4 modelini y\u00fckleyin\nefficientnet = EfficientNet.from_name('efficientnet-b4')\n\n# YOLOv8n modelini olu\u015fturun\nyolo_model = model.YOLOv8(cfg='yolov8x.yaml')\n\n# Modeli EfficientNet B4 omurgas\u0131 ile de\u011fi\u015ftirin\nyolo_model.model.backbone = efficientnet\n\n# E\u011fitim ve do\u011frulama veri k\u00fcmelerini ayarlay\u0131n\ntrain_dataset = torch.utils.data.Dataset(train_images, train_targets)\ntrain_loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, shuffle=True)\nval_dataset = torch.utils.data.Dataset(val_images, val_targets)\nval_loader = torch.utils.data.DataLoader(val_dataset, batch_size=16, shuffle=False)\n\n# E\u011fitim ayarlar\u0131n\u0131 
belirleyin\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\ncriterion = torch.nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(yolo_model.parameters(), lr=0.001)\n\n# Modeli e\u011fitin\nfor epoch in range(100):\n # Her epoch i\u00e7in e\u011fitim ve do\u011frulama a\u015famalar\u0131\n for images, targets in tqdm(train_loader):\n images = images.to(device)\n targets = [target.to(device) for target in targets]\n\n # Tahminleri hesaplay\u0131n ve kayb\u0131 hesaplay\u0131n\n outputs = yolo_model(images)\n loss = criterion(outputs[0], targets[0])\n\n # Geri yay\u0131l\u0131m ve parametreleri g\u00fcncelleyin\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n # Do\u011frulama setinde performans\u0131 de\u011ferlendirin\n with torch.no_grad():\n total_correct = 0\n total_samples = 0\n for images, targets in tqdm(val_loader):\n images = images.to(device)\n targets = [target.to(device) for target in targets]\n\n # Tahminleri hesaplay\u0131n\n outputs = yolo_model(images)\n predictions = torch.argmax(outputs[0], dim=1)\n\n # Do\u011fru tahminleri say\u0131n\n for i, (prediction, target) in enumerate(zip(predictions, targets)):\n total_samples += 1\n if prediction == target:\n total_correct += 1\n\n # Do\u011frulama do\u011frulu\u011funu hesaplay\u0131n\n accuracy = total_correct / total_samples\n print(f'Epoch {epoch + 1}: Val Accuracy: {accuracy:.4f}')\n\n# E\u011fitim tamamland\u0131ktan sonra modeli kaydedin\ntorch.save(yolo_model, 'yolov8_knife_gun_detector.pth')\n\nprint('Model successfully trained and saved!')\n\nHow can i fix this error?\nAttributeError Traceback (most recent call last)\n in ()\n 3 \n 4 # YOLOv8n modelini olu\u015fturun\n----> 5 yolo_model = model.YOLOv8(cfg='yolov8x.yaml')\n 6 \n 7 # Modeli EfficientNet B4 omurgas\u0131 ile de\u011fi\u015ftirin\n\n/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)\n 1686 if name in modules:\n 1687 return modules[name]\n-> 1688 raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\n 1689 \n 1690 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:\n\nAttributeError: 'YOLO' object has no attribute 'YOLOv8'"} +{"id": "000665", "text": "I work with YOLO8\nMy task is to determine the contours of the cars (that is, display them as rectangles), then make a count (for counting: the contour must cross the middle of the frame\nfor two lanes of different traffic: oncoming and passing\nAnd at the end, count the total number of cars\nLoop and request the next frame each time and work should be done with it separately\nAt the moment, I'm trying to draw the contours, but there's nothing on the video.\nIn the output, I get something like this:\n`0: 384x640 27 cars, 1 truck, 1 tv, 93.0ms\nSpeed: 1.9ms preprocess, 93.0ms inference, 0.4ms postprocess per image at shape (1, 3, 384, 640)\n0: 384x640 26 cars, 2 trucks, 88.1ms\nSpeed: 1.7ms preprocess, 88.1ms inference, 0.5ms postprocess per image at shape (1, 3, 384, 640)\n0: 384x640 26 cars, 2 trucks, 104.2ms\nSpeed: 1.5ms preprocess, 104.2ms inference, 0.6ms postprocess per image at shape (1, 3, 384, 640)`\nMy code:\nimport cv2\nfrom ultralytics import YOLO\n\nmodel = YOLO('yolov8s.pt')\n\nvideo_path = 'output2.avi'\nvideo = cv2.VideoCapture(video_path)\n\nif not video.isOpened():\n print(\"\u041e\u0448\u0438\u0431\u043a\u0430: \u041d\u0435 \u0443\u0434\u0430\u0435\u0442\u0441\u044f \u043e\u0442\u043a\u0440\u044b\u0442\u044c \u0432\u0438\u0434\u0435\u043e.\")\n 
exit()\n\ncurrent_frame = 0\n\nwhile True:\n ret, frame = video.read()\n if not ret:\n break\n\n current_frame += 1\n\n results = model(frame)\n\n car_results = [detection for detection in results if detection[-1] in [2, 5, 7]]\n\n for result in car_results:\n x1, y1, x2, y2, confidence, class_id = result.xyxy[0]\n x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)\n cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)\n\n cv2.imshow('Video', frame)\n\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n\nvideo.release()\ncv2.destroyAllWindows()\n\n\nI have tried various ways to fix the problem, but all attempts are unsuccessful"} +{"id": "000666", "text": "i want to use default YOLOv8 model (yolov8m.pt) for object detection. I know that default YOLO models uses COCO dataset and can detect 80+ objects. I just want to detect like 5 of them, how can i achieve this?"} +{"id": "000667", "text": "When I try to train my model by executing the code cell below:\npython train.py --img-size 2048 --cfg cfg/training/yolov7.yaml --hyp data/road_sign_data.yaml --batch 8 --epochs 100 --data data/road_sign.yaml --weights yolov7_training.pt --workers 24 --name yolo_road_det\nI have the following error message :\nTraceback (most recent call last):\nFile \"C:\\Users\\531558\\Documents\\streamline2\\yolov7\\train.py\", line 12, in \nimport torch.distributed as dist\nFile \"C:\\Users\\531558\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\torch_init_.py\", line 141, in \nraise err\nOSError: [WinError 126] The specified module could not be found. Error loading \"C:\\Users\\531558\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python311\\site-packages\\torch\\lib\\shm.dll\" or one of its dependencies.\nIt looks like it can not import torch.distibuted\nI tried to change the version of python I am using (from 3.12 to 3.11.9) but it still does not work. I also tried many other way to do the training of a yolov7 model but none of them were working... If you have any solution it would be very helpful"} +{"id": "000668", "text": "I'm encountering an issue with my YOLO model.\nInitially, I trained it with 7 classes. Now, I want to add 4 new classes to the model. However, when I combine the data for the original 7 classes with the new 4 classes, the training time and associated cloud costs significantly increase. 
What's a good solution to efficiently incorporate these additional classes into the model without inflating training time and costs?\nMy expecting is reduce the cost and training time in incremental learnng."} +{"id": "000669", "text": "i trained a model yolov8 with custom dataset containing 26 classess, but when i convert the model to tflite i noticed that it gives as output [1,30,8400] and this is what caused me errors when using my model with flutter.\nthe error\nE/AndroidRuntime(18479): Caused by: java.lang.IllegalArgumentException: Cannot copy from a TensorFlowLite tensor (Identity) with shape [1, 30, 8400] to a Java object with shape [1, 26].\nhow can i modify the output shape of my model ?\nthis is how trained my model :\n from ultralytics import YOLO\n\n model = YOLO('yolov8s.pt')\n\n results = model.train(data='/kaggle/input/my- \n dataset/my_dataset/data.yaml', epochs=100, imgsz=640)\n\nand this is the content of file the data.yaml\ntrain: /kaggle/input/dataset-asl/ASL/train/images\nval: /kaggle/input/dataset-asl/ASL/valid/images\n\nnc: 26\nnames: ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', \n'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', \n'Y', 'Z']\n\nand this is how convert my model to format tflite :\nfrom ultralytics import YOLO\n\n# Load a model\nmodel = YOLO('best.pt')\n\n# Export the model\nmodel.export(format='tflite')\n\nand this is output after convert :\nUltralytics YOLOv8.2.4 Python-3.9.0 torch-2.2.1+cpu CPU (Intel \nCore(TM) i5-7300U 2.60GHz)\nModel summary (fused): 168 layers, 3010718 parameters, 0 \ngradients, 8.1 GFLOPs\n\nPyTorch: starting from 'best.pt' with input shape (1, 3, 640, 640) \nBCHW and output shape(s) (1, 30, 8400) (6.0 MB)\n\nTensorFlow SavedModel: starting export with tensorflow 2.15.0...\nWARNING tensorflow<=2.13.1 is required, but tensorflow==2.15.0 is \ncurrently installed \nhttps://github.com/ultralytics/ultralytics/issues/5161\n\nONNX: starting export with onnx 1.15.0 opset 17...\nONNX: simplifying with onnxsim 0.4.36...\nONNX: export success 1.8s, saved as 'best.onnx' (11.7 MB)\nTensorFlow SavedModel: starting TFLite export with onnx2tf \n1.17.5...\nTensorFlow SavedModel: export success 14.3s, saved as \n'best_saved_model' (29.5 MB)\n\nTensorFlow Lite: starting export with tensorflow 2.15.0...\nTensorFlow Lite: export success 0.0s, saved as \n'best_saved_model\\best_float32.tflite' (11.7 MB)\n\nExport complete (16.8s)\nResults saved to C:\\Users\\Bachir\\Desktop\\api\\tflite\nPredict: yolo predict task=detect \nmodel=best_saved_model\\best_float32.tflite imgsz=640 \nValidate: yolo val task=detect \nmodel=best_saved_model\\best_float32.tflite imgsz=640 \ndata=/kaggle/input/my-dataset/my_dataset/data.yaml \nVisualize: https://netron.app\n'best_saved_model\\\\best_float32.tflite'\nemphasized text"} +{"id": "000670", "text": "I've trained a YOLOV8 model to identify objects in an intersection (ie cars, roads etc).\nIt is working OK and I can get the output as an image with the objects of interested segmented.\nHowever, what I need to do is to capture the raw geometries (polygons) so I can save them on a txt file later on.\nI tried what Ive found in the documentation (https://docs.ultralytics.com/modes/predict/#key-features-of-predict-mode) however the returning object is not the same as the documentation says.\nIn fact, the result is a list of tensorflow numbers:\n\nHere's my code:\nimport argparse\nimport cv2\nimport numpy as np\nfrom pathlib import Path\nfrom ultralytics.yolo.engine.model import YOLO \n \n# Parse 
command line arguments\nparser = argparse.ArgumentParser()\nparser.add_argument('--source', type=str, required=True, help='Source image directory or file')\nparser.add_argument('--output', type=str, default='output', help='Output directory')\nargs = parser.parse_args()\n\n# Create output directory if it doesn't exist\nPath(args.output).mkdir(parents=True, exist_ok=True)\n\n# Model path\nmodel_path = r'C:\\\\_Projects\\\\best_100img.pt'\n\n# Load your model directly\nmodel = YOLO(model_path)\nmodel.fuse()\n\n# Load image(s)\nif Path(args.source).is_dir():\n image_paths = list(Path(args.source).rglob('*.tiff'))\nelse:\n image_paths = [args.source]\n\n# Process each image\nfor image_path in image_paths:\n img = cv2.imread(str(image_path))\n if img is None:\n continue\n\n # Perform inference\n predictions = model.predict(image_path, save=True, save_txt=True)\n \nprint(\"Processing complete.\")\n\nHere's the problem: the return object (predictions variable) has no boxes, masks, keypoints and etc.\nI guess my questions are:\n\nWhy the result is so different from the documentation?\nIs there a conversion step?"} +{"id": "000671", "text": "I trained a YOLO-V8 instance segmentation model to segment an object with class label 0. I used the CLI to instantiate the trained model and predict on the test data.\n!yolo task=segment mode=predict model='/weights/best.pt' conf=0.25 source='/test/images' imgsz=1024 save=True save_txt=True save_conf=True\n\nAfter prediction, the label files gets stored in .txt format. These label files contain the class index followed by the polygonal coordinates and finally the confidence score of the bounding box predictions. But, bounding box coordinates, that is, x-center, y-center, width, height are not included in the label file. I would also like to include these bounding box coordinates to each of the labels file since I would like to use these bounding box coordinates later for post-processing. A sample label file content looks like this:\n0 0.21582 0.0898438 0.214844 0.0908203 0.213867 0.0908203 0.210938 0.09375 0.210938 0.0947266 0.203125 0.102539 0.203125 0.103516 0.201172 0.105469 0.200195 0.105469 0.199219 0.106445 0.199219 0.113281 0.200195 0.114258 0.200195 0.115234 0.203125 0.115234 0.204102 0.116211 0.223633 0.116211 0.224609 0.117188 0.227539 0.117188 0.228516 0.118164 0.230469 0.118164 0.231445 0.119141 0.234375 0.119141 0.235352 0.120117 0.248047 0.120117 0.249023 0.121094 0.251953 0.121094 0.25293 0.12207 0.254883 0.0927734 0.260742 0.0917969 0.256836 0.0917969 0.255859 0.0908203 0.233398 0.0908203 0.232422 0.0898438 0.910849\n\nI am not saving the predictions to any 'result' variable here and I am running the predictions only in the CLI."}