ae0721096f55ed1b252498642b943cc7f5e9eafc
Q: Mono Cecil add missing assembly

For some reason, when I try to load an assembly and analyze it, I get the error Mono.Cecil.AssemblyResolutionException: Failed to resolve assembly... I don't really care why this exception is thrown; I know where the missing assembly is. Is there something like the AppDomain.CurrentDomain.AssemblyResolve event, but for Mono.Cecil? I could load the missing assembly manually, but I don't know how. So, how can I load an assembly for Mono.Cecil?

A: Apparently Mono.Cecil supports that. When you load the assembly with AssemblyDefinition.ReadAssembly, you can set the AssemblyResolver property inside the ReaderParameters to your own resolver. To create a resolver, just inherit from BaseAssemblyResolver, like:

    private class CustomResolver : BaseAssemblyResolver
    {
        private DefaultAssemblyResolver _defaultResolver;

        public CustomResolver()
        {
            _defaultResolver = new DefaultAssemblyResolver();
        }

        public override AssemblyDefinition Resolve(AssemblyNameReference name)
        {
            AssemblyDefinition assembly;
            try
            {
                assembly = _defaultResolver.Resolve(name);
            }
            catch (AssemblyResolutionException ex)
            {
                assembly = ...; // Your resolve logic
            }
            return assembly;
        }
    }

A: In my case, while working on a .NET Core 3.1 project, I was missing this package:

    Install-Package Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator -Version 4.0.1

https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator/
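For completeness, here is a minimal sketch of wiring the custom resolver into a read. It assumes the missing assembly sits in a known directory (extraDir and Target.dll below are hypothetical placeholders) and uses AddSearchDirectory, which CustomResolver inherits from BaseAssemblyResolver; inside the catch block you could equally call AssemblyDefinition.ReadAssembly with the known path.

    // Sketch only: extraDir and Target.dll are placeholders.
    var resolver = new CustomResolver();
    resolver.AddSearchDirectory(extraDir); // inherited from BaseAssemblyResolver

    var parameters = new ReaderParameters { AssemblyResolver = resolver };
    var assembly = AssemblyDefinition.ReadAssembly("Target.dll", parameters);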
stackoverflow
{ "language": "en", "length": 177, "provenance": "stackexchange_0000F.jsonl.gz:859101", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524414" }
bfa843358787d8e8deef8bab798115ebf84e27dc
Q: Failed to resolve: com.google.firebase:firebase-core:11.0.0

I've seen this question asked, with replies along the lines of re-syncing the project and updating the SDK. I am right at the beginning of a new project, trying to set up authentication with Firebase, and I'm getting the following errors. Does Android Studio just hate me?

    Error:Failed to resolve: com.google.firebase:firebase-core:11.0.0
    Error:Error:line (29)Failed to resolve: com.google.firebase:firebase-auth:11.0.0

A: Just go to the SDK Manager and update the Google Repository.

A: Go to Tools > Android > SDK Manager, click on SDK Tools and update the following:

* Google Repository
* Android SDK Platform-Tools

A: You can just click on "Add Analytics to your app"; follow this: https://firebase.google.com/docs/android/setup

A: I was able to resolve this by using 10.0.0. Thanks all for your help. Nick

A: For Android Studio 4.1.2:

1. First make sure you have installed Google Play services from the SDK Manager. Go to Tools on the Android Studio menu > SDK Manager > SDK Tools; if it's not installed, install it.
2. Connect your app to Firebase. Under Tools, click on Firebase; Firebase will open a new window. Click on Analytics; it will direct you to the Firebase site. From there, create a project, and it will automatically select your project.
3. When you go back to your project you will see "connected to firebase".
4. If you run your project and get the error "could not find com.google.firebase 19.3.1", paste this into the dependencies of your project-level build.gradle (don't forget to Sync Now):

       classpath 'com.google.gms:google-services:4.3.5'

   Then at the bottom/end of app/build.gradle paste this:

       apply plugin: 'com.google.gms.google-services'

5. Rebuild your project and run again.

NB:

    classpath 'com.google.gms:google-services:4.3.5'
    apply plugin: 'com.google.gms.google-services'

If it was helpful, leave a comment.

A: Line order is very important for Gradle.

1)

    buildscript {
        ext.kotlin_version = '1.2.20'
        repositories {
            maven { url "https://maven.google.com" }
            maven { url 'https://plugins.gradle.org/m2/' }
        }
        dependencies {
            classpath 'com.android.tools.build:gradle:2.3.3'
            classpath 'com.google.gms:google-services:3.0.0' //3.2.0
            classpath 'com.google.firebase:firebase-plugins:1.0.4'
            classpath 'gradle.plugin.com.onesignal:onesignal-gradle-plugin:0.8.1'
            classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        }
    }

and 2)

    dependencies {
        compile project(':libraries')
        compile fileTree(include: ['*.jar'], dir: 'libs')
        compile "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
        //--------------------------------- One Signal -------------------
        //compile 'com.onesignal:OneSignal:[3.1.1, 3.3.99]'
        compile 'com.google.android.gms:play-services-gcm:11.0.4'
        compile 'com.google.android.gms:play-services-location:11.0.4'
        compile 'com.google.firebase:firebase-core:11.0.4'
        compile 'com.google.firebase:firebase-messaging:11.0.4'
        //-------------------------------------------------------------
        compile 'com.android.support:multidex:1.0.1'
        compile 'com.android.support:percent:26.0.0-alpha1'
        compile 'com.android.support:design:26.0.0-alpha1'
        compile 'com.android.support:support-v4:26.0.0-alpha1'
        compile 'com.android.support:appcompat-v7:26.0.0-alpha1'
        compile 'com.android.support:recyclerview-v7:26.0.0-alpha1'
        compile 'com.android.support:cardview-v7:26.0.0-alpha1'
        compile 'com.android.support.constraint:constraint-layout:1.0.2'
    }

    apply plugin: 'com.google.gms.google-services'

Good luck.

A: Make sure that your project-level build.gradle file has these:

    allprojects {
        repositories {
            jcenter()
            mavenLocal()
            maven { url 'https://maven.google.com' }
        }
    }
stackoverflow
{ "language": "en", "length": 411, "provenance": "stackexchange_0000F.jsonl.gz:859129", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524506" }
ea5428c9af9df0f49e08a14ebab825e2b78670d4
Q: Grep or the like: overlapping matches

For:

    echo "the quick brown fox" | grep -Po '[a-z]+ [a-z]+'

I get:

    the quick
    brown fox

but I wanted:

    the quick
    quick brown
    brown fox

How?

A: With awk:

    awk '{for(i=1;i<NF;i++) print $i,$(i+1)}' <<<"the quick brown fox"

Update, with Python:

    #!/usr/bin/python3.5
    import re
    s = "the quick brown fox"
    matches = re.finditer(r'(?=(\b[a-z]+\b \b[a-z]+\b))', s)
    ans = [i.group(1) for i in matches]
    print(ans)
    # or print them one per line
    for i in ans:
        print(i)

Output:

    ['the quick', 'quick brown', 'brown fox']
    the quick
    quick brown
    brown fox

A: Simply reusing the original solution to get the Markov chain:

    echo "the quick brown fox" | grep -Po '[a-z]+ [a-z]+'
    echo "the quick brown fox" | sed 's/^[a-z]* //' | grep -Po '[a-z]+ [a-z]+'

The second line (namely sed) removes the first word of the input; therefore, the rest of the command generates the missing pairs. The same approach can also be generalized using sed's ability to run loops:

    echo pattern1pattern2 | sed ':start;s/\(pattern1\)\(pattern2\)/<\1|\2>\2/;t start' | grep -o '<[^>]*>' | tr -d '<>|'

This solution will work with partially overlapping patterns where pattern2 can be overlapped by the next match. It assumes <>| to be reserved auxiliary characters. Furthermore, it assumes that the pattern1pattern2 regex cannot match any string that is matched by pattern2 alone. The sed substitutes pattern1pattern2 with <pattern1|pattern2>pattern2 and repeats this substitution as long as any matches are found (the branching t command allows matching previously substituted strings, unlike the g option). I.e., in every iteration, one <pattern1|pattern2> group is left behind indicating our matches, while an instance of pattern2 can still be matched within the next match. Finally, we pick the groups using the original approach and strip the auxiliary marks.

A: Another awk:

    awk '{print $1,$2 RS $2,$3 RS $3,$4}' <<<"the quick brown fox"
    the quick
    quick brown
    brown fox
stackoverflow
{ "language": "en", "length": 299, "provenance": "stackexchange_0000F.jsonl.gz:859132", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524514" }
b3d8b143fc546ee5be2d611333579bac00cc35a5
Q: How do I downgrade from Swift 4 to Swift 3?

I just upgraded my code from Swift 3 to Swift 4. Later, I changed my mind and wanted to use my code with iOS 10. Then I got this error:

    "Swift Language Version" (SWIFT_VERSION) is required to be configured correctly for targets which use Swift. Use the [Edit > Convert > To Current Swift Syntax…] menu to choose a Swift version or use the Build Settings editor to configure the build setting directly.

I used the [Edit > Convert > To Current Swift Syntax…] menu to choose a Swift version, but then it said 'No Filter Results'. Then, I tried to use Build Settings and changed 'SWIFT_VERSION' from 4.0 to 3.1 and also 3.0. However, the error persisted. Does anyone know a solution to this? Thanks in advance!

A: Clean your project (CMD + Shift + K) and make sure the SWIFT_VERSION on every target is set to Swift 3, using Xcode 8.3.3.
stackoverflow
{ "language": "en", "length": 164, "provenance": "stackexchange_0000F.jsonl.gz:859138", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524531" }
6e7bd6a3acbfd81831ac1d2e650d51ca1b7b93b0
Q: Angular: Relative navigation on a named router outlet

I have the following Angular route configuration:

    const appRoutes: Routes = [
      { path: '', component: DirectoryComponent },
      {
        path: 'manage/:type/:id',
        component: ManageComponent,
        outlet: 'manage',
        children: [
          { path: '', component: PreviewComponent, pathMatch: 'full' },
          { path: 'add/:elementType', component: EditComponent }
        ]
      }
    ];

ManageComponent is a sandbox component in which PreviewComponent and EditComponent will be rendered. A user use case redirects the user to http://localhost:4200/#/(manage:manage/bar/12762), which matches the preview component. Everything is okay here. From the PreviewComponent, when the user clicks on a button, I want to make a relative navigation to the EditComponent, so that when the navigation finishes the URL is http://localhost:4200/#/(manage:manage/bar/12762/add/foo)

I tried:

    this.router.navigate([{outlets: {manage: ['add', 'foo']}}], {relativeTo: this.route});

and:

    this.router.navigate([{outlets: {manage: ['add', 'foo']}}]);

But every time, the user is redirected to http://localhost:4200/#/add/foo. How can I make this navigation work?

A: I know that it's an old question, but I found a solution while I was looking for an answer. You can use { relativeTo: this.activatedRoute.parent } and everything works like a charm.
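Put together, a minimal sketch of the accepted fix inside PreviewComponent (assuming Router and ActivatedRoute are injected as router and route; the names are illustrative):

    // Navigate relative to the parent route so the named-outlet segments
    // are appended to manage/bar/12762 instead of replacing the whole URL.
    this.router.navigate(
      [{ outlets: { manage: ['add', 'foo'] } }],
      { relativeTo: this.route.parent }
    );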
stackoverflow
{ "language": "en", "length": 173, "provenance": "stackexchange_0000F.jsonl.gz:859153", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524570" }
669cd73261098b27ea60b3cabee8b2dade477157
Q: FCM topic - Cannot subscribe to topic: xxx with token: (null) - iOS

I'm getting this error from the Firebase Messaging API:

    [Firebase/Messaging][I-FCM002010] Cannot subscribe to topic: /topics/testTopic with token: (null)

But before:

    Messaging.messaging().subscribe(toTopic: "/topics/testTopic")

I'm printing out the token like this:

    print("TOKEN: \(InstanceID.instanceID().token() ?? "NO TOKEN")")

The result is:

    TOKEN:cXPhGQ_inE4:APA91bEKZF5depHmIm9gDliCFRCRcnJf5LYy5FMg6nhpWvKU3o3HEtr1WTBHUiCZXT4XzhVg2oqXzhtfrgf83brtLdqXii546644ciMPO80tri4JPueQBClKbaomEfoh54ku8E2lw

So the token isn't null. Am I doing something wrong? Can anyone help?

A: The problem was that I tried to subscribe in didFinishLaunchingWithOptions, but at that point not all services were set up. The solution was to subscribe in the didRegisterUserNotificationSettings delegate method.

A: In MessagingDelegate, try:

    func messaging(_ messaging: Messaging, didReceiveRegistrationToken fcmToken: String) {
        Messaging.messaging().subscribe(toTopic: "/topics/testTopic")
    }

A: I had a similar problem. The solution was to invoke FirebaseApp.configure() first:

    FirebaseApp.configure()
    Messaging.messaging().delegate = self

instead of:

    Messaging.messaging().delegate = self // this breaks FCM
    FirebaseApp.configure()

A: The most ideal place to resolve this issue is in the MessagingDelegate method didRefreshRegistrationToken:

    func messaging(_ messaging: Messaging, didRefreshRegistrationToken fcmToken: String) {
        // TODO: subscribe to topics here
    }
stackoverflow
{ "language": "en", "length": 166, "provenance": "stackexchange_0000F.jsonl.gz:859189", "question_score": "26", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524660" }
5aa981c5a43b4636406a1d7c83cd9f198645d649
Q: how to set value in formArray

I have a form group named inputForm, containing a FormArray of topics. I want to change the value of an input field using a dynamically created filtered list. The filter function works fine:

    filtertopic(idx: number) {
      const test = this.topics.value[idx].topic;
      if (test !== "") {
        this.onderdeel = "topics";
        this.queryselector = idx;
        this.filteredList = this.Topics.filter(function(el) {
          return el.toLowerCase().indexOf(test.toLowerCase()) > -1;
        }.bind(this));
      } else {
        this.filteredList = [];
      }
    }

But the handleBlur function, which should change the value in the input field, does not work:

    handleBlur() {
      console.log(this.selectedIdx);
      if (this.selectedIdx > -1) {
        if (this.onderdeel == "topics") {
          this.topics.value[this.queryselector].topic.setValue(this.filteredList[this.selectedIdx]);
        } else {
          this.query = this.filteredList[this.selectedIdx];
        }
      }
      this.filteredList = [];
      this.selectedIdx = -1;
    }

I think it has to do with the line

    this.topics.value[this.queryselector].topic.setValue(this.filteredList[this.selectedIdx]);

used to set the form control value. Does anybody know the solution?

A:

    this.yourdataobject.forEach(task => {
      this.formcontrolname.push(
        this.fb.group({
          name: [task.name, Validators.required]
        })
      );
    });
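The suspicion in the question is right: value returns a plain snapshot object, not the controls, so calling setValue on it fails. A sketch of the usual fix, assuming topics is a FormArray whose items are FormGroups that each contain a topic control:

    // Address the control through the FormArray API instead of the
    // value snapshot; at(i) returns the i-th control in the array.
    const group = this.topics.at(this.queryselector) as FormGroup;
    group.get('topic').setValue(this.filteredList[this.selectedIdx]);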
stackoverflow
{ "language": "en", "length": 148, "provenance": "stackexchange_0000F.jsonl.gz:859234", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524794" }
0d324fd723687c92ff5dd2488b6a27f881aa2580
Q: Missing javaCompileTask for variant

Trying to build something with Android Studio 3.0 that worked fine in a previous version. Now I am seeing:

    Error:Execution failed for task ':mobile-app:transformClassesWithRetrolambdaForDevDebug'.
    Missing javaCompileTask for variant: dev/debug/0 from output dir: /Users/myname/mycompany-android-app/MyProject/mobile-app/build/intermediates/transforms/retrolambda/dev/debug/0

I had a prior compile issue I got around by adding the following to my module-level build.gradle inside of defaultConfig:

    javaCompileOptions {
        annotationProcessorOptions {
            includeCompileClasspath false
        }
    }

I can't find much of anything on "javaCompileTask". Maybe that relates to something else?

A: I ended up commenting out the apply plugin line for Retrolambda, and that did it.

A: I tried to use Retrolambda version 3.6.1 with Android Gradle plugin 3.0.0-alpha5, and it does not work. This is an issue with the Android Gradle plugin 3.0.0-alpha* versions. Reference: "Does not currently work with the Retrolambda plugin. However, you should instead use the plugin's built-in support for Java 8 language features." Documented in the Known Issues section at https://developer.android.com/studio/preview/features/new-android-plugin.html

A: I had the same problem; refer to the library https://github.com/evant/gradle-retrolambda. I just added the line below to the dependencies:

    classpath 'me.tatarka:gradle-retrolambda:3.7.0'

and removed this:

    plugins {
        id "me.tatarka.retrolambda" version "3.7.0"
    }

A: Search for me.tatarka.retrolambda everywhere (or part of it) and comment it out in the two files named build.gradle. This error was due to the Gradle update you installed; after that you will be able to run your Android app.
stackoverflow
{ "language": "en", "length": 223, "provenance": "stackexchange_0000F.jsonl.gz:859244", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524829" }
c9cecbed26739aaf6f221656f36e5fb155466c88
Q: Returning the first model from a hasMany relationship in Laravel

Is it possible to create a quick method to return the first model from a one-to-many relationship? Here is my code, from the model file:

    public function books()
    {
        return $this->hasMany('App\Models\Book');
    }

    public function first_book()
    {
        return $this->book()->first();
    }

This is the error I'm getting:

    Call to undefined method Illuminate\Database\Query\Builder::addEagerConstraints()

The reason I want to use this is so that I can collect the first record using the with() method, for example:

    $authors = Author::with('first_book')->select('*');

I'm using these records with Datatables.

A: To use with(), your method has to return a collection from a relation method, because your relation is hasMany. So what you could do is:

    public function books()
    {
        return $this->hasMany('App\Models\Book');
    }

    public function first_book()
    {
        return $this->hasMany('App\Models\Book')->limit(1);
    }

This returns a collection containing just your first item, so you'd still have to call first():

    $authors = Author::with('first_book')->select('*');
    $authors->first_book->first();

A: I might be late, but for your future use, and for others who want the same output, try this one:

    // If you need the last one
    public function books()
    {
        return $this->hasOne('App\Models\Book')->latest();
    }

    // If you need the first entry
    public function books()
    {
        return $this->hasOne('App\Models\Book')->oldest();
    }

A: A one-to-one relationship is a very basic relation. For example:

    public function books()
    {
        return $this->hasOne('App\Models\Book');
    }

A: With Laravel 9.x you can use latestOfMany or oldestOfMany, like so:

    // your relationship
    public function books()
    {
        return $this->hasMany('App\Models\Book');
    }

    // Get the first inserted child model
    public function first_book()
    {
        return $this->hasOne('App\Models\Book')->oldestOfMany();
    }

    // Get the last inserted child model
    public function last_book()
    {
        return $this->hasOne('App\Models\Book')->latestOfMany();
    }

BONUS: If you are on PHP 5.5 or later, you can get the fully qualified class name by using the scope resolution operator, which looks clean:

    // your relationship
    public function books()
    {
        return $this->hasMany(Book::class);
    }

    // Get the first inserted child model
    public function first_book()
    {
        return $this->hasOne(Book::class)->oldestOfMany();
    }

    // Get the last inserted child model
    public function last_book()
    {
        return $this->hasOne(Book::class)->latestOfMany();
    }

Link to the Laravel documentation

A: A relation that can be eager loaded has to return a query. The first() function returns an Eloquent object. The solution is to limit the number of results of this query, like so:

    public function first_book()
    {
        return $this->books()->take(1);
    }

$author->first_book will still be a collection, but it will only contain the first related book in your database.
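A short usage sketch for the hasOne-based variants above (assuming a Book model with a title attribute; the names are illustrative): with hasOne, the eager-loaded property is a single model rather than a collection, so no extra first() call is needed.

    // Eager-load the first book for every author, avoiding N+1 queries.
    $authors = Author::with('first_book')->get();

    foreach ($authors as $author) {
        // first_book is a Book model or null with the hasOne variants.
        $title = $author->first_book ? $author->first_book->title : null;
    }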
stackoverflow
{ "language": "en", "length": 400, "provenance": "stackexchange_0000F.jsonl.gz:859265", "question_score": "27", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524888" }
4e2c034759187bc2a67201e8ac83722d2cdb031d
Q: How do I multiply matrices in PyTorch?

With numpy, I can do a simple matrix multiplication like this:

    a = numpy.ones((3, 2))
    b = numpy.ones((2, 1))
    result = a.dot(b)

However, this does not work with PyTorch:

    a = torch.ones((3, 2))
    b = torch.ones((2, 1))
    result = torch.dot(a, b)

This code throws the following error:

    RuntimeError: 1D tensors expected, but got 2D and 2D tensors

How do I perform matrix multiplication in PyTorch?

A: You can use "@" for multiplying two tensors in PyTorch:

    a = torch.tensor([[1,2], [3,4]])
    b = torch.tensor([[5,6], [7,8]])
    c = a @ b  # for matrix multiplication
    d = a * b  # for elementwise multiplication

A: To perform a matrix (rank 2 tensor) multiplication, use any of the following equivalent ways:

    AB = A.mm(B)
    AB = torch.mm(A, B)
    AB = torch.matmul(A, B)
    AB = A @ B  # Python 3.5+ only

There are a few subtleties. From the PyTorch documentation: torch.mm does not broadcast; for broadcasting matrix products, see torch.matmul(). For instance, you cannot multiply two 1-dimensional vectors with torch.mm, nor multiply batched matrices (rank 3). To this end, you should use the more versatile torch.matmul. For an extensive list of the broadcasting behaviours of torch.matmul, see the documentation. For element-wise multiplication, you can simply do (if A and B have the same shape):

    A * B  # element-wise matrix multiplication (Hadamard product)

A: Use torch.mm:

    torch.mm(a, b)

torch.dot() behaves differently from np.dot(). There's been some discussion about what would be desirable here. Specifically, torch.dot() treats both a and b as 1D vectors (irrespective of their original shape) and computes their inner product. The error is thrown because this behaviour makes your a a vector of length 6 and your b a vector of length 2; hence their inner product can't be computed. For matrix multiplication in PyTorch, use torch.mm(). NumPy's np.dot(), in contrast, is more flexible; it computes the inner product for 1D arrays and performs matrix multiplication for 2D arrays.

torch.matmul performs matrix multiplication if both arguments are 2D and computes their dot product if both arguments are 1D. For inputs of such dimensions, its behaviour is the same as np.dot. It also lets you do broadcasting or matrix x matrix, matrix x vector and vector x vector operations in batches:

    # 1D inputs, same as torch.dot
    a = torch.rand(n)
    b = torch.rand(n)
    torch.matmul(a, b)  # torch.Size([])

    # 2D inputs, same as torch.mm
    a = torch.rand(m, k)
    b = torch.rand(k, j)
    torch.matmul(a, b)  # torch.Size([m, j])

A: Use torch.mm(a, b) or torch.matmul(a, b). Both are the same:

    >>> torch.mm
    <built-in method mm of type object at 0x11712a870>
    >>> torch.matmul
    <built-in method matmul of type object at 0x11712a870>

There's one more option that may be good to know: the @ operator (thanks, @Simon H).

    >>> a = torch.randn(2, 3)
    >>> b = torch.randn(3, 4)
    >>> a@b
    tensor([[ 0.6176, -0.6743,  0.5989, -0.1390],
            [ 0.8699, -0.3445,  1.4122, -0.5826]])
    >>> a.mm(b)
    tensor([[ 0.6176, -0.6743,  0.5989, -0.1390],
            [ 0.8699, -0.3445,  1.4122, -0.5826]])
    >>> a.matmul(b)
    tensor([[ 0.6176, -0.6743,  0.5989, -0.1390],
            [ 0.8699, -0.3445,  1.4122, -0.5826]])

The three give the same results.

Related links: Matrix multiplication operator; PEP 465 -- A dedicated infix operator for matrix multiplication
stackoverflow
{ "language": "en", "length": 526, "provenance": "stackexchange_0000F.jsonl.gz:859273", "question_score": "87", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524901" }
80ba3dd4ffd01f23534a090b018a969e45517a7c
Q: Return a std::stringstream - Compilation Fail

I'm trying to compile this code:

    #include <sstream>

    std::stringstream foo()
    {
        std::stringstream log;
        log << "Hello there\n";
        return log;
    }

GCC 4.9.2 gives me the following error (with -std=c++11):

    [x86-64 gcc 4.9.2] error: use of deleted function 'std::basic_stringstream<char>::basic_stringstream(const std::basic_stringstream<char>&)'

Here is an example. Since std::stringstream has a move constructor, why is the copy constructor invoked instead of the move constructor?

Note: from GCC 5 the code compiles correctly: see here.

A: If we take a look at the GCC 5 changes, we can see:

    Full support for C++11, including the following new features:
    - std::deque and std::vector meet the allocator-aware container requirements;
    - movable and swappable iostream classes;
    ...

The second change (movable and swappable iostream classes) is what's making your code compile on GCC 5 and fail to compile on 4.9: the move constructor simply wasn't implemented yet for std::stringstream.
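For anyone stuck on the older toolchain, a minimal workaround sketch (a suggestion, not part of the answer above): return the accumulated std::string instead of the stream, which sidesteps the missing move constructor in pre-GCC-5 libstdc++.

    #include <sstream>
    #include <string>

    // Returns the stream's contents; std::string is movable even on
    // GCC 4.9, so this compiles cleanly with -std=c++11.
    std::string foo()
    {
        std::stringstream log;
        log << "Hello there\n";
        return log.str();
    }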
stackoverflow
{ "language": "en", "length": 142, "provenance": "stackexchange_0000F.jsonl.gz:859286", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524935" }
83f6a77318ce93d6844dca009c6fd82bddb19ac5
Q: Monitoring Google Container Engine Disk Use in Percent with Stackdriver

Is there a way of checking the disk usage for volumes/PDs in a Google cluster with Stackdriver? We found a way to check the bytes used, but this is pretty useless when there is no way to compare it to the limits (usage in percent)... Also, there is no bytes-free metric. I read about custom metrics, but did not understand how those might help here. Is there a way of adding a policy which does what we need (possibly via the API)?

A: You should be able to find this information using:

    Metric: container/container/disk/bytes_total
    Filter:
      project_id: <id>
      pod_id: <id>
      device_name: Volume:<name_of_volume>

You can use a similar metric for bytes used.
stackoverflow
{ "language": "en", "length": 121, "provenance": "stackexchange_0000F.jsonl.gz:859314", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525022" }
a93db408d703507b7dbe81d700eaf327bb66a64b
Q: How do I add a BackgroundImage to a ListView in Xamarin Forms?

I am aware that there is no property called BackgroundImage for the ListView class, only the BackgroundColor property, but that is not what I am looking for. Is there a way to add a background image to the ListView so that when I scroll, the image stays in place and the tiles simply 'move' over the image? Adding the image to the ContentPage also does not work, since the ListView simply overlays it.

A: Your post is really good; you just missed double quotation marks on the Source property. Set your ListView's BackgroundColor to Transparent:

    <RelativeLayout>
        <Image Source="background.png" BackgroundColor="Transparent"
               VerticalOptions="CenterAndExpand" HorizontalOptions="CenterAndExpand"/>
        <ListView x:Name="listView"
                  VerticalOptions="CenterAndExpand" HorizontalOptions="CenterAndExpand"
                  BackgroundColor="Transparent"
                  ItemTapped="OnItemTapped"
                  ItemsSource="{Binding .}" />
    </RelativeLayout>
stackoverflow
{ "language": "en", "length": 129, "provenance": "stackexchange_0000F.jsonl.gz:859328", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525068" }
f48e614d8dc512ffaac6f26a819b4df6ee9a0504
Q: DomPopupSourceFactory provider error when using material-dropdown-select

I am trying to use material dropdown select, but I am getting this error:

    EXCEPTION: No provider found for DomPopupSourceFactory.

materialDirectives is added to the directives list, and the HTML call is simple:

    <material-dropdown-select></material-dropdown-select>

I tried the angular_components_example and it worked fine. The problem is with my project. I already tried cleaning .packages and executed pub get. Nothing worked. I tried some other material components and they worked.

A: If you add materialProviders to AppComponent, it should work:

    @Component(
      selector: 'my-app',
      directives: const <dynamic>[
        CORE_DIRECTIVES,
        materialDirectives,
      ],
      providers: const <dynamic>[
        materialProviders, // <<<<<<<<<<<<<<<<
      ],
    )
    class AppComponent {...}

A: It works in the angular_components example because the app-level component includes the necessary popupBindings provider. If you aren't including materialProviders in your app, you can use a more specific provider in your components. Here is the minimum boilerplate required for using material-dropdown-select:

    import 'package:angular/angular.dart';
    import 'package:angular_components/laminate/popup/module.dart';
    import 'package:angular_components/material_select/material_dropdown_select.dart';

    @Component(
      selector: 'my-dropdown-select',
      directives: const [
        MaterialDropdownSelectComponent,
      ],
      providers: const [
        popupBindings,
      ],
    )
    class MyDropdownSelectComponent {}
stackoverflow
{ "language": "en", "length": 173, "provenance": "stackexchange_0000F.jsonl.gz:859342", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525109" }
5ab2b57a761481082a64f169f8bf7595accb3c56
Q: How to set headers in the $.get() or $.post() function

How do I add authorization headers to the following ajax request?

    $.get(urlAPI + "/api/account/get", function (data) {
        alert(data);
    }, 'json');

A: Based on the jQuery documentation for $.get(), from version 1.12 there is a settings option to pass the base settings to the method:

    jQuery.get( [settings ] )

or

    $.get( [settings] )

settings is a plain object, so you can use it like:

    {
        url: 'http://requesturl.io',
        data: {},
        headers: {
            'your-custom-header': 'custom-header-value'
        }
    }

A:

    $.ajax({
        url: 'foo/bar',
        headers: { 'x-my-custom-header': 'some value' }
    });

A: You can define ajaxSetup before your $.get() request to include the authorization in the header:

    $.ajaxSetup({
        headers: {
            'Authorization': "auth username and password"
        }
    });

    $.get(urlAPI + "/api/account/get", function (data) {
        alert(data);
    }, 'json');

A: Here is a simple way (note that this uses AngularJS's $http service, not jQuery):

    $http.get('www.google.com/someapi', {
        headers: {'Authorization': 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=='}
    });
stackoverflow
{ "language": "en", "length": 142, "provenance": "stackexchange_0000F.jsonl.gz:859358", "question_score": "14", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525167" }
e2fcc2e788b9d217ea7ee9fe924f1fb31b8a5572
Q: What is the maximum length of a Firebase user property value?

I'm passing some client-generated user properties to Firebase Analytics and encountered the following message in logcat:

    W/FA: Value is too long; discarded. Value kind, name, value length: user property, comp0, 37
    D/FA: Logging event (FE): error(_err), Bundle[{firebase_event_origin(_o)=auto, firebase_error_length(_el)=37, firebase_error_value(_ev)=comp0, firebase_error(_err)=7}]

I looked up error code 7 on the Firebase Analytics Error Codes page, and while it reveals that the code means "user property value is too long", it doesn't specify what the maximum length is. What's the maximum length of user property values? Is there a maximum length for key names, too?

A: The documentation for FirebaseAnalytics.UserProperty reveals the answer: UserProperty names can be up to 24 characters long, may only contain alphanumeric characters and underscores ("_"), and must start with an alphabetic character. UserProperty values can be up to 36 characters long.
stackoverflow
{ "language": "en", "length": 146, "provenance": "stackexchange_0000F.jsonl.gz:859375", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525216" }
6ad9202ba3a1a913fe239e00f862495cef3d5667
Q: How does yield behave when we pass it an action in Ember?

I have the following code:

Component template:

    {{#link-to "user.profile" account.id disabled=user.online}}
      {{yield}}
    {{/link-to}}

Template:

    {{#my-component data=x}}
      <button> MY BUTTON </button>
    {{/my-component}}

I use the component in different templates and I'd like the yielded elements to have an action. I've read you can use it like this, but I can't really grasp the behaviour:

    {{#link-to "user.profile" account.id disabled=user.online}}
      {{yield (action "showModal")}}
    {{/link-to}}

Can anyone shed some light on this subject?

A: Here is its usage:

    {{#my-component as |act|}}
      <button onclick={{action act}}>Button</button>
    {{/my-component}}

Here is a working twiddle. To understand more, here is a good blog post. It is one of the writer's three posts about contextual components.
stackoverflow
{ "language": "en", "length": 120, "provenance": "stackexchange_0000F.jsonl.gz:859393", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525262" }
85589890d026882d591563f5e3016ce73e9fb2ad
Q: How to add animation to the Ngx-Bootstrap dropdown?

I am using ngx-bootstrap for my project, especially the Dropdown. I want to add some animations to it, but don't know how to do that. Is there any recommended way to add animation to ngx-bootstrap/dropdown?
stackoverflow
{ "language": "en", "length": 42, "provenance": "stackexchange_0000F.jsonl.gz:859397", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525268" }
12446bf26601ac5ccb5d954c271a7f3b497de64a
Q: Curl Could not resolve host in Elasticsearch

I am learning to use Elasticsearch on Windows. I followed the instructions on this page and got stuck at "Checking that Elasticsearch is running". I had downloaded the latest curl binary executable (7.54.0) for Windows, but when I copied the following line using the COPY AS CURL button:

    curl -XGET 'localhost:9200/?pretty'

it gave the error:

    curl: (6) Could not resolve host: 'localhost

I had tried the solution here to disable IPv6, but the problem remains.

A: In my case, curl somehow doesn't recognize the ' symbol (the Windows shell does not treat single quotes as quoting characters, so they are passed through to curl as part of the host name). Changing ' to " fixes the problem:

    curl -XGET "localhost:9200/?pretty"
stackoverflow
{ "language": "en", "length": 108, "provenance": "stackexchange_0000F.jsonl.gz:859410", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525305" }
4c95bd36964f2b4763e14c9c30716e1c587769fc
Q: How can I stop a Lua coroutine from outside the function?

I'm trying to create a dispatcher which schedules multiple coroutines. The dispatcher needs to pause a coroutine, and I can't figure out how to do this.

Update: instead of killing it, I meant pausing the coroutine from the outside.

A: You can kill a coroutine by setting a debug hook on it that calls error() from that hook. The next time the hook is called, it will trigger the error() call, which will abort the coroutine:

    local co = coroutine.create(function()
      while true do print(coroutine.yield()) end
    end)
    coroutine.resume(co, 1)
    coroutine.resume(co, 2)
    debug.sethook(co, function() error("almost dead") end, "l")
    print(coroutine.resume(co, 3))
    print(coroutine.status(co))

This prints:

    2
    3
    false   coro-kill.lua:6: almost dead
    dead

A: There is a library that will yield when you return true in a hook set with debug.sethook(co, function() return true end, "y"). The library is enough to create a multitasking Lua system; just run require("yieldhook") at the very start of your code. Further info at the git repo: https://github.com/evg-zhabotinsky/yieldhook

A: Use coroutine.yield(coroutine-you-want-to-pause).
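To sketch the pause idea the yieldhook answer is describing (hedged: this relies on Lua 5.2+ behaviour, where line and count hooks are allowed to yield; the exact hook semantics vary by Lua version):

    -- Suspend co from outside by installing a count hook that yields.
    -- Whoever called coroutine.resume(co) regains control at that point
    -- and can simply resume co later to continue it where it left off.
    debug.sethook(co, function()
      coroutine.yield()
    end, "", 1000) -- fire roughly every 1000 VM instructions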
stackoverflow
{ "language": "en", "length": 165, "provenance": "stackexchange_0000F.jsonl.gz:859454", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525457" }
d04ef5ba0dcb948c2e643d7058d3bf4d17c50e89
Q: UWP: How to know if an enum value was added in a later version

For my UWP app, in order to support a minimum version of 10240, I have to write adaptive code, i.e. detect the presence of a particular API at runtime. But in order to do so, I need to know whether an API was present in 10240 or was added in a later version. That's OK for classes, since it's written (normally at the end of each page) in Microsoft's documentation, but how do I get that info for added methods or enum values? For example, on this page https://learn.microsoft.com/en-us/windows/uwp/debug-test-perf/version-adaptive-code#adaptive-code-examples, it's written that ChatWithoutEmoji was added in 14393 (1607). But the documentation page at https://learn.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.Input.InputScopeNameValue only says the enum is present in 10240; nowhere does it say that one of its values, ChatWithoutEmoji, is only present from 14393.

[Update] Note that I already know how to detect an API at runtime; my question is how to know when I need to do the runtime check.
stackoverflow
{ "language": "en", "length": 167, "provenance": "stackexchange_0000F.jsonl.gz:859463", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525486" }
c5803286bfa7bb70e29a730a9fb30704b18a666e
Q: Is it possible to create a Scheduled Rule from CloudWatch for a Lambda State Function Set

I want to use CloudFormation to create a stack of preexisting Lambda functions into a state machine using Step Functions, on a schedule (30 mins). I have successfully created the stack for my other methods. In essence, I need help or guidance on how to create a scheduled event in CloudFormation for Step Functions. Here is what I have been trying:

    "NOTDScheduler": {
      "Type": "AWS::Events::Rule",
      "Properties": {
        "Description": "Schedules a NOTD every 30 minutes",
        "ScheduleExpression": "rate(30 minutes)",
        "State": "ENABLED",
        "Targets": [
          {
            "Arn": "${statemachineARN}",
            "statemachineARN": { "Fn::GetAtt": [ "NOTDStateMachine", "Arn" ] },
            "Id": "NOTDScheduleTarget"
          }
        ]
      }
    },

But I keep getting errors such as:

    [Error] /Resources/NOTDScheduler/Properties/Targets/0/statemachineARN/Fn::GetAtt: Resource type AWS::StepFunctions::StateMachine does not support attribute {Arn}.

and I have no clue why Arn isn't a supported attribute. Is there a workaround?

A: To get the ARN of an AWS::StepFunctions::StateMachine resource, you need to call !Ref NOTDStateMachine instead of !GetAtt NOTDStateMachine.Arn. Check "Return Values" here: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-stepfunctions-statemachine.html
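Applied to the JSON template above, a hedged sketch of the corrected target. EventsInvokeRole is a hypothetical IAM role not shown in the question; CloudWatch Events also needs a RoleArn granting states:StartExecution in order to start a state machine target.

    "Targets": [
      {
        "Arn": { "Ref": "NOTDStateMachine" },
        "Id": "NOTDScheduleTarget",
        "RoleArn": { "Fn::GetAtt": [ "EventsInvokeRole", "Arn" ] }
      }
    ]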
stackoverflow
{ "language": "en", "length": 169, "provenance": "stackexchange_0000F.jsonl.gz:859479", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525535" }
6025e61af1a08ddb192dd3ecc9850b060b2fd4a5
Stackoverflow Stackexchange Q: 3D model does not show up in AR I'm using the PlacingObjects ARKit example and I'm trying to add my own object to it. My object is replacing the candle. When I launch the app on my phone and try to place the object, I tap it and nothing happens. All the other objects work fine. Here is the Swift file for my object. import Foundation import SceneKit class Turret: VirtualObject { override init() { super.init(modelName: "Turret", fileExtension: "scn", thumbImageFilename: "candle", title: "Turret") } required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") } } The thumb image remains "candle" because the app would otherwise crash when I opened the menu, probably because it's trying to open a thumbnail that doesn't exist. I've been looking through all of the files to try and find out why, but I couldn't find anything.
Q: 3D model does not show up in AR I'm using the PlacingObjects ARKit example and I'm trying to add my own object to it. My object is replacing the candle. When I launch the app on my phone and try to place the object, I tap it and nothing happens. All the other objects work fine. Here is the Swift file for my object. import Foundation import SceneKit class Turret: VirtualObject { override init() { super.init(modelName: "Turret", fileExtension: "scn", thumbImageFilename: "candle", title: "Turret") } required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") } } The thumb image remains "candle" because the app would otherwise crash when I opened the menu, probably because it's trying to open a thumbnail that doesn't exist. I've been looking through all of the files to try and find out why, but I couldn't find anything.
stackoverflow
{ "language": "en", "length": 141, "provenance": "stackexchange_0000F.jsonl.gz:859490", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525562" }
d3aecf704c691d04b19363fda21b32c0d2672be7
Stackoverflow Stackexchange Q: Google cloud deployment manager update Container cluster I'm trying to create a Google Cloud Deployment Manager configuration to deploy and manage a Google Cloud Container cluster. So far, creating a configuration to create a cluster works, however updating fails. If I change a setting, the execution of the script fails with an error message I can't decipher: code: RESOURCE_ERROR location: /deployments/my-first-cluster/resources/my-first-test-cluster-setup message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid JSON payload received. Unknown name \"cluster\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"@type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid JSON payload received. Unknown name \"cluster\": Cannot find field."}]}],"statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/*****/zones/europe-west1-b/clusters/my-first-cluster"}}' The relevant configuration: resources: - name: my-first-test-cluster-setup type: container.v1.cluster properties: zone: europe-west1-b cluster: name: my-first-cluster description: My first cluster setup nodePools: - name: my-cluster-node-pool config: machineType: n1-standard-1 initialNodeCount: 1 autoscaling: enabled: true minNodeCount: 3 maxNodeCount: 5 management: autoUpgrade: true autoRepair: true A: It looks like this is a bug in Deployment Manager which means that it is not able to update GKE clusters. The bug is reported here. It has the same strange 'unknown name "cluster"' message that you see. There is no suggestion on the ticket about workarounds or resolution. We have seen this same problem when updating a different cluster property.
Q: Google cloud deployment manager update Container cluster I'm trying to create a Google Cloud Deployment Manager configuration to deploy and manage a Google Cloud Container cluster. So far, creating a configuration to create a cluster works, however updating fails. If I change a setting, the execution of the script fails with an error message I can't decipher: code: RESOURCE_ERROR location: /deployments/my-first-cluster/resources/my-first-test-cluster-setup message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid JSON payload received. Unknown name \"cluster\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"@type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid JSON payload received. Unknown name \"cluster\": Cannot find field."}]}],"statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/*****/zones/europe-west1-b/clusters/my-first-cluster"}}' The relevant configuration: resources: - name: my-first-test-cluster-setup type: container.v1.cluster properties: zone: europe-west1-b cluster: name: my-first-cluster description: My first cluster setup nodePools: - name: my-cluster-node-pool config: machineType: n1-standard-1 initialNodeCount: 1 autoscaling: enabled: true minNodeCount: 3 maxNodeCount: 5 management: autoUpgrade: true autoRepair: true A: It looks like this is a bug in Deployment Manager which means that it is not able to update GKE clusters. The bug is reported here. It has the same strange 'unknown name "cluster"' message that you see. There is no suggestion on the ticket about workarounds or resolution. We have seen this same problem when updating a different cluster property.
stackoverflow
{ "language": "en", "length": 186, "provenance": "stackexchange_0000F.jsonl.gz:859527", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525692" }
3168a1e4fab429cc40c42a4f6444c2c330f904a9
Stackoverflow Stackexchange Q: angular build size with sass is huge So, I'm working on a big app where all components have their own sass style file (we use ViewEncapsulation.Native). When I build with npm run build --stats-json --prod --aot and check the stats with https://chrisbateman.github.io/webpack-visualizer/ I get this. All those big orange blocks on the right are sass.shim.ngstyle.ts files, and each one is like 195k! A: So after some investigation, the problem was that every component was importing _mixins.sass, but the mixins file was also importing bootstrap-custom.sass, which was quite large. The solution was to import _bootstrap-custom.sass from _main.sass and import bootstrap-custom-variables.sass from the _mixins.sass file, because some custom mixins needed those variables.
Q: angular build size with sass is huge So, I'm working on a big app where all components have their own sass style file (we use ViewEncapsulation.Native). When I build with npm run build --stats-json --prod --aot and check the stats with https://chrisbateman.github.io/webpack-visualizer/ I get this. All those big orange blocks on the right are sass.shim.ngstyle.ts files, and each one is like 195k! A: So after some investigation, the problem was that every component was importing _mixins.sass, but the mixins file was also importing bootstrap-custom.sass, which was quite large. The solution was to import _bootstrap-custom.sass from _main.sass and import bootstrap-custom-variables.sass from the _mixins.sass file, because some custom mixins needed those variables.
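As a sketch of the resulting import layout (file names taken from the answer; the exact contents of each file are assumptions, and the Sass indented syntax is used to match the project's .sass files):

// _mixins.sass: import only the variables the mixins need, not all of Bootstrap
@import 'bootstrap-custom-variables'

// _main.sass: compiled once for the whole app, so Bootstrap CSS is emitted once
@import 'bootstrap-custom'
@import 'mixins'

// some-component.sass: per-component styles pull in only the lightweight mixins
@import 'mixins'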
stackoverflow
{ "language": "en", "length": 113, "provenance": "stackexchange_0000F.jsonl.gz:859540", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525732" }
5abb678e950eb99d4c4ed6cca1cffb6e77209cd9
Stackoverflow Stackexchange Q: Angular CLI: Inject css on Sass compile without full reload? We have our Angular 4 app scaffolded with angular cli, using scss as the default styling. We run the app with ng serve --sourcemap --extractCss -o to get scss source maps. This works fine: the app compiles, runs, source maps work, etc. However, coming from the Angular1/Gulp/Browsersync world, I am missing the injection of the built css without a full page reload. Currently, whenever I edit a sass file, webpack compiles and reloads the page in Chrome. Is this the only way to work now? Is there no way to simply force a refresh of the css without a reload (like Browsersync did in the Gulp days)? A: This is not exactly the same as CSS injection, but will make your page reload & compile a hell of a lot faster! With Angular 7 you can follow this guide to enable HMR (Hot Module Replacement). It will also make reloading your .ts files very fast! Small addendum: I think you can in fact load the changed CSS by injection by following this piece of the HMR documentation
Q: Angular CLI: Inject css on Sass compile without full reload? We have our Angular 4 app scaffolded with angular cli, using scss as the default styling. We run the app with ng serve --sourcemap --extractCss -o to get scss source maps. This works fine: the app compiles, runs, source maps work, etc. However, coming from the Angular1/Gulp/Browsersync world, I am missing the injection of the built css without a full page reload. Currently, whenever I edit a sass file, webpack compiles and reloads the page in Chrome. Is this the only way to work now? Is there no way to simply force a refresh of the css without a reload (like Browsersync did in the Gulp days)? A: This is not exactly the same as CSS injection, but will make your page reload & compile a hell of a lot faster! With Angular 7 you can follow this guide to enable HMR (Hot Module Replacement). It will also make reloading your .ts files very fast! Small addendum: I think you can in fact load the changed CSS by injection by following this piece of the HMR documentation
stackoverflow
{ "language": "en", "length": 187, "provenance": "stackexchange_0000F.jsonl.gz:859546", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525743" }
8cbc9f9a6deb5f0e25d361e68340a5a31fa5091f
Stackoverflow Stackexchange Q: Global Angular CLI version greater than local version When running ng serve I get this warning about my global CLI version being greater than my local version. I don't notice any issues from this warning, but I was wondering if the two versions should be in sync? Also, is it necessary to have a local version if you have a global version? The warning: Your global Angular CLI version (1.1.1) is greater than your local version (1.0.6). The local Angular CLI version is used. A: Update the Angular CLI for the workspace (local): npm install --save-dev @angular/cli@latest Note: Make sure the global version is installed properly using the command with '-g': npm install -g @angular/cli@latest Run the update command to get a list of all dependencies that need to be upgraded: ng update Next, run the update command as below for each individual Angular core package: ng update @angular/cli @angular/core However, I had to additionally add the '--force' and '--allow-dirty' flags to fix all other pending issues: ng update @angular/cli @angular/core --allow-dirty --force
Q: Global Angular CLI version greater than local version When running ng serve I get this warning about my global CLI version being greater than my local version. I don't notice any issues from this warning, but I was wondering if the two versions should be in sync? Also, is it necessary to have a local version if you have a global version? The warning: Your global Angular CLI version (1.1.1) is greater than your local version (1.0.6). The local Angular CLI version is used. A: Update the Angular CLI for the workspace (local): npm install --save-dev @angular/cli@latest Note: Make sure the global version is installed properly using the command with '-g': npm install -g @angular/cli@latest Run the update command to get a list of all dependencies that need to be upgraded: ng update Next, run the update command as below for each individual Angular core package: ng update @angular/cli @angular/core However, I had to additionally add the '--force' and '--allow-dirty' flags to fix all other pending issues: ng update @angular/cli @angular/core --allow-dirty --force A: Run the following command: npm install --save-dev @angular/cli@latest After running the above command, the console might pop up the message below: The Angular CLI configuration format has been changed, and your existing configuration can be updated automatically by running the following command: ng update @angular/cli A: In my case I just used this command in the project: ng update @angular/cli A: Two ways to solve this global and local angular CLI version issue. 1. Keep a specific angular-cli version for both environments. 2. Go to the latest angular-cli version for both environments. 1. Specific angular-cli version First, find out which angular version you want to keep on the global and local environment. ng --version For example: here we are keeping the local angular CLI version 8.3.27, so we have to change the global version to 8.3.27 as well. Use cmd> npm install --save-dev @angular/[email protected] -g (the '-g' flag sets the global angular-cli version). 2. Latest angular version for both CLI environments. npm install --save-dev @angular/cli@latest -g npm install --save-dev @angular/cli@latest A: To answer one of the questions, it is necessary to have both a global and a local install for the tools to work. If you try to run ng serve on an application without the local install of the CLI (global install only), you will get the following error. You have to be inside an Angular CLI project in order to use the serve command. It will also print this message: Please take the following steps to avoid issues: "npm install --save-dev @angular/cli@latest" Run that npm command to update the CLI locally and avoid the warning that you are getting. Other question: It looks like they do not have to be in sync, but it's probably best that they are, in order to avoid any unusual behavior with the tool or any inconsistencies with the code the tool generates. Why do we need both the global install and a local install? The global install is needed to start a new application. The ng new <app-name> command is run using the global installation of the CLI. In fact, if you try to run ng new while inside the folder structure of an existing CLI application, you get this lovely error: You cannot use the new command inside an Angular CLI project. Other commands that can be run from the global install are ng help, ng get/set with the --global option, ng version, ng doc, and ng completion. The local install of the CLI is used after an application has been built.
This way, when new versions of the CLI are available, you can update your global install without affecting the local install. This is good for the stability of a project. Most ng commands only make sense with the local version, like lint, build and serve, etc. According to the CLI GitHub readme, to update the CLI you must update both the global and local packages. However, I have used the CLI with the global and local versions varying without any trouble so far. If I ever run across an error related to having the global and local CLI versions out of sync, I will post that here. A: This works for me; it will update the local version to the latest: npm uninstall --save-dev angular-cli npm install --save-dev @angular/cli@latest npm install To verify the version: ng --version A: I'm not fluent in English, but if I understand the problem, is it that locally in the project you have an older version of the CLI than globally? And would you like to use the newer global one instead of the older local one? If so, a very simple method is enough: run, in the project directory, npm link @angular/cli More on the subject on this page: https://docs.npmjs.com/cli/link A: You just need to update the Angular CLI: npm install --save-dev @angular/cli@latest A: First find out the global angular-cli version by running ng --version The above command will show the global and local angular-cli versions. If you want the global and local angular cli to be the same, you can just do npm install --save-dev @angular/[email protected] where 1.7.4 is your global angular-cli version. Then if you run ng serve --open your code should run. A: This is how I fixed it, in Visual Studio Code's terminal. First clean the cache: npm cache clean --force Then update the CLI: ng update @angular/cli If any module is missing after this, use the command below: npm install A: When you use the Angular framework in your projects, it has two different versions: a global Angular version and a local Angular version installed directly in your project. It is the difference between these two versions that explains the display of the error message "Global Angular CLI version greater than local version". You have to understand the difference between the two versions to then be able to solve this problem. To solve it, run this command: npm install --save-dev @angular/cli@latest A: npm uninstall -g @angular/cli npm cache verify npm install -g @angular/cli@latest Then in your local project package: rm -rf node_modules dist npm install --save-dev @angular/cli@latest npm i ng update @angular/cli ng update @angular/core npm install --save-dev @angular-devkit/build-angular I was getting the error below: Error: Unexpected end of JSON input The steps above helped, from this post: Can't update angular to version 6 A: There is another way to avoid the global installation when creating a new application. In my case I'm using Angular 9 but the customer requires Angular 8. # create empty directories mkdir angular-8-cli mkdir my-angular-8-project # init an empty npm project cd angular-8-cli npm init -y # install a local angular 8 cli npm i @angular/cli@8 # go to your angular 8 project cd ../my-angular-8-project # use the previously installed angular 8 cli to create a new angular 8 project ../angular-8-cli/node_modules/.bin/ng new my-angular-8-project --directory=. A: This is how I solved the issue.
Copy and run these commands: ng version npm install --save-dev @angular/cli@latest ng version A: If you upgraded your Angular version, you need to change the version of @angular-devkit/build-angular inside your package.json from your old version to the one matching the upgraded Angular version. I had upgraded to Angular 10, so I needed to go to https://www.npmjs.com/package/@angular-devkit/build-angular and check which version corresponds to Angular 10. In my case, I found that the version needed to be 0.1001.7, so I changed my old version to this version in my package.json and ran npm install That was enough. A: * ng version * npm install --save-dev @angular/cli@latest * Close the command prompt and open it again * ng version * If your PowerShell does not recognize the ng command, run this command in your PowerShell: Set-ExecutionPolicy -scope currentuser -executionpolicy remotesigned A: npm uninstall --save-dev angular-cli npm install --save-dev @angular/cli@latest Your existing configuration can be updated automatically by running the following command: ng update @angular/cli or: npm install A: If you just wish to turn off this warning, run the command ng config -g cli.warnings.versionMismatch false This is useful because sometimes you want to have different local and global versions. For example, you might have the latest Angular as your global version since you'll use it on new projects, but have to run an old project on an older Angular version. A: // install npm-check-updates npm i -g npm-check-updates // run npm-check-updates ncu -u // you should then get a list of all your packages updated to the newest version // install the updated packages as prompted npm install A: To solve the error, update your global or local version so that the versions of the Angular CLI match. SOLUTION 1) npm install @angular/cli@latest --save-dev [OR] SOLUTION 2) To install a specific version of the Angular CLI locally: npm install @angular/cli@<version> --save-dev To install a specific version of the Angular CLI globally: npm install -g @angular/cli@<version> If you get an error with the commands, add the --legacy-peer-deps flag. A: It is caused because the global and local angular versions are different. To update the global angular version, first you need to run the following command in a command prompt or the VS Code terminal: npm install --save-dev @angular/cli@latest After that, if any vulnerabilities are found, run the following command to fix them: npm audit fix A: I did this and it worked npm ng install /config --save-dev -dev @angular/cli@latest @angular-devkit/build-angular -global -g @angular/cli@latest A: I was getting this warning even when I was not in any angular project; later I realized that I had mistakenly created an angular project directly in the root folder and forgotten to delete the node_modules, package.json, and package-lock.json. So no matter which directory I moved into, I was getting this warning. If you want to find where your local version is coming from, run the command below: npm ls @angular/cli --depth=0 The above command will print the directory from which the current angular version is being used, and once I deleted the node_modules the warning disappeared. A: This should solve the issue: ng update @angular/cli @angular/core A: npm install --save-dev @angular-devkit/build-angular did help. ng update @angular/cli created angular.json and other updates: Collecting installed dependencies... Found 58 dependencies.
** Executing migrations for package '@angular/cli' ** Updating karma configuration Updating configuration Removing old config file (.angular-cli.json) Writing config file (angular.json) Some configuration options have been changed, please make sure to update any npm scripts which you may have modified. DELETE .angular-cli.json CREATE angular.json (4394 bytes) CREATE browserslist (429 bytes) UPDATE karma.conf.js (993 bytes) UPDATE public/tsconfig.spec.json (295 bytes) UPDATE package.json (2618 bytes) UPDATE tsconfig.json (437 bytes) UPDATE tslint.json (3135 bytes) UPDATE public/polyfills.ts (587 bytes) UPDATE public/tsconfig.app.json (199 bytes) npm WARN @angular/core@<version> requires a peer of zone.js@^0.8.4 but none is installed. You must install peer dependencies yourself. A: Just do these things: npm install --save-dev @angular/cli@latest npm audit fix npm audit fix --force A: Specify the module using the --module parameter. For example, if the main module is app.module.ts, run this: ng g c new-component --module app Or if you are in another directory, then: ng g c component-name --module ../
stackoverflow
{ "language": "en", "length": 1838, "provenance": "stackexchange_0000F.jsonl.gz:859547", "question_score": "506", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525746" }
cb3a1deb67f6e36bf9d021582bc1445641bc15a1
Stackoverflow Stackexchange Q: Unit of width/height properties of an SVG Given this SVG code, a blue rectangle is being drawn. <svg> <rect width="50" height="200" style="fill:blue"/> </svg> The blue rectangle's size varies depending on the viewport. I assume that the unit 50 for the width property is not just plain pixels; otherwise it would have been the same across different screens. So what exactly is the meaning of this unit? A: The unit of measurement is pixels. You are not getting the expected result due to the behaviour of vector drawing in the svg element. If you specify a height and width on your svg you will find the rectangle behaves as expected. <svg width="400" height="400"> <rect width="50" height="200" style="fill:blue"/> </svg> The svg size should be the maximum extent of all content contained within it. If only a rectangle resides inside it then you can make the containing svg the size you desire and simply use height / width 100% on your rectangle. <svg width="50" height="200"> <rect width="100%" height="100%" style="fill:blue"/> </svg>
Q: Unit of width/height properties of an SVG Given this SVG code, a blue rectangle is being drawn. <svg> <rect width="50" height="200" style="fill:blue"/> </svg> The blue rectangle's size varies depending on the viewport. I assume that the unit 50 for the width property is not just plain pixels; otherwise it would have been the same across different screens. So what exactly is the meaning of this unit? A: The unit of measurement is pixels. You are not getting the expected result due to the behaviour of vector drawing in the svg element. If you specify a height and width on your svg you will find the rectangle behaves as expected. <svg width="400" height="400"> <rect width="50" height="200" style="fill:blue"/> </svg> The svg size should be the maximum extent of all content contained within it. If only a rectangle resides inside it then you can make the containing svg the size you desire and simply use height / width 100% on your rectangle. <svg width="50" height="200"> <rect width="100%" height="100%" style="fill:blue"/> </svg>
stackoverflow
{ "language": "en", "length": 170, "provenance": "stackexchange_0000F.jsonl.gz:859604", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44525896" }
01924d4c647fbcb597626f7e25473260421b3306
Stackoverflow Stackexchange Q: Save allure report in PDF and email I have set up the Allure reporting system with TestNG using Maven. My boss wants reports in an emailable format or PDF format. Is there any way to save an Allure report as a PDF? A: We have developed a simple executable to generate a docx file from Allure results, with an option to generate a pdf file using some third-party libraries: https://github.com/typhoon-hil/allure-docx
Q: Save allure report in PDF and email I have set up the Allure reporting system with TestNG using Maven. My boss wants reports in an emailable format or PDF format. Is there any way to save an Allure report as a PDF? A: We have developed a simple executable to generate a docx file from Allure results, with an option to generate a pdf file using some third-party libraries: https://github.com/typhoon-hil/allure-docx A: There is also another tool, supported by eroshenkoam, that allows you to create a pdf report based on your allure results. You can take a look at its README and give it a try! https://github.com/eroshenkoam/allure-pdf Hope it helps. A: By default, Allure reports are not standalone HTML reports; they should be hosted on a web server. But if you're using Jenkins for test executions, the Allure plugin in Jenkins will take care of this with build-wise executions, and then you can export them to PDF.
stackoverflow
{ "language": "en", "length": 151, "provenance": "stackexchange_0000F.jsonl.gz:859653", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526043" }
49e29d3c17dc9a9f96df7294c0cebf187a267f7b
Stackoverflow Stackexchange Q: Declarative models with a dynamic base (SQLAlchemy) I have a number of models defined in one place that need to be shared across multiple codebases (each of which only needs access to a specific subset of the models). Specifically, a unified API needs to query all these models, but a number of data processing projects only need to query specific models. from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Model1(Base): ... class Model2(Base): ... class Model3(Base): ... With the above as the basis, is it possible to have a "dynamic" Base so that project 1 can have a Base that contains only Model3, while project 2 has a Base that contains Model1 and Model3? A solution would be for each project to declare an "empty" Base, split the entire set of models up into segments, and for each project then to import each "sub-Base" that it needs and merge that into its own empty Base. From the documentation of SQLAlchemy I cannot tell if this is currently possible. Ideally, being able to import a specific model and add it to my own Base would be optimal.
Q: Declarative models with a dynamic base (SQLAlchemy) I have a number of models defined in one place that need to be shared across multiple codebases (each of which only needs access to a specific subset of the models). Specifically, a unified API needs to query all these models, but a number of data processing projects only need to query specific models. from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Model1(Base): ... class Model2(Base): ... class Model3(Base): ... With the above as the basis, is it possible to have a "dynamic" Base so that project 1 can have a Base that contains only Model3, while project 2 has a Base that contains Model1 and Model3? A solution would be for each project to declare an "empty" Base, split the entire set of models up into segments, and for each project then to import each "sub-Base" that it needs and merge that into its own empty Base. From the documentation of SQLAlchemy I cannot tell if this is currently possible. Ideally, being able to import a specific model and add it to my own Base would be optimal.
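One way to get that effect, sketched below on the example models (a pattern built from SQLAlchemy's documented declarative-mixin support, not something the question confirms fits your setup): keep the column definitions in plain mixin classes in the shared package, and let each project compose its own declarative Base from just the subset it needs.

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

# Shared package: plain mixins carry the schema; no Base is involved yet.
class Model1Mixin(object):
    __tablename__ = 'model1'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Model3Mixin(object):
    __tablename__ = 'model3'
    id = Column(Integer, primary_key=True)

# Project 2: build a Base that contains only Model1 and Model3.
Base = declarative_base()

class Model1(Model1Mixin, Base):
    pass

class Model3(Model3Mixin, Base):
    pass

Relationships between models would need declared_attr on the mixins, and a project that wants all models (the unified API) simply mixes every mixin onto its own Base.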
stackoverflow
{ "language": "en", "length": 185, "provenance": "stackexchange_0000F.jsonl.gz:859735", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526326" }
8f2ce73d8decf91f1c88f030aab671bc72c31a3f
Stackoverflow Stackexchange Q: How does Airflow's BranchPythonOperator work? I'm struggling to understand how BranchPythonOperator in Airflow works. I know it's primarily used for branching, but I am confused by the documentation as to what to pass into a task and what I need to pass/expect from the task upstream. Given the simple example in the documentation on this page, what would the source code look like for the upstream task called run_this_first and the 2 downstream ones that are branched? How exactly does Airflow know to run branch_a instead of branch_b? Where does the upstream task's output get noticed/read? A: Your BranchPythonOperator is created with a python_callable, which will be a function. That function shall return, based on your business logic, the task name of the immediately downstream tasks that you have connected. This could be 1 to N tasks immediately downstream. There is nothing that the downstream tasks HAVE to read; however, you could pass them metadata using xcom. def decide_which_path(): if something is True: return "branch_a" else: return "branch_b" branch_task = BranchPythonOperator( task_id='run_this_first', python_callable=decide_which_path, trigger_rule="all_done", dag=dag) branch_task.set_downstream(branch_a) branch_task.set_downstream(branch_b) It's important to set the trigger_rule, or all of the rest will be skipped, as the default is all_success.
Q: How does Airflow's BranchPythonOperator work? I'm struggling to understand how BranchPythonOperator in Airflow works. I know it's primarily used for branching, but I am confused by the documentation as to what to pass into a task and what I need to pass/expect from the task upstream. Given the simple example in the documentation on this page, what would the source code look like for the upstream task called run_this_first and the 2 downstream ones that are branched? How exactly does Airflow know to run branch_a instead of branch_b? Where does the upstream task's output get noticed/read? A: Your BranchPythonOperator is created with a python_callable, which will be a function. That function shall return, based on your business logic, the task name of the immediately downstream tasks that you have connected. This could be 1 to N tasks immediately downstream. There is nothing that the downstream tasks HAVE to read; however, you could pass them metadata using xcom. def decide_which_path(): if something is True: return "branch_a" else: return "branch_b" branch_task = BranchPythonOperator( task_id='run_this_first', python_callable=decide_which_path, trigger_rule="all_done", dag=dag) branch_task.set_downstream(branch_a) branch_task.set_downstream(branch_b) It's important to set the trigger_rule, or all of the rest will be skipped, as the default is all_success.
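As a sketch of the xcom idea mentioned in the answer (the key name and payload are made up; provide_context=True matches the Airflow 1.x style used in the answer):

def decide_which_path(**kwargs):
    # Push some metadata for whichever branch runs next.
    kwargs['ti'].xcom_push(key='decision_reason', value='something was True')
    return "branch_a"

branch_task = BranchPythonOperator(
    task_id='run_this_first',
    python_callable=decide_which_path,
    provide_context=True,  # lets the callable receive 'ti' (the task instance)
    dag=dag)

# A downstream task's callable can then read the metadata back:
def branch_a_callable(**kwargs):
    reason = kwargs['ti'].xcom_pull(task_ids='run_this_first', key='decision_reason')
    print(reason)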
stackoverflow
{ "language": "en", "length": 196, "provenance": "stackexchange_0000F.jsonl.gz:859743", "question_score": "25", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526354" }
e71eb30abdf7cd1e7be5b9a9de9f9f7c842d7c37
Stackoverflow Stackexchange Q: Can Schema.org Expected Types be collections, too? A Schema.org object of type Person can have a sameAs property of type URL. According to Google's structured data site, the sameAs property can be a single item or an array. The docs on Schema.org do not mention whether sameAs can be a single item or an array. Is this just Google deviating from Schema.org? Or is it the case that all properties in Schema.org can be single items or arrays? A: Every Schema.org property can have multiple values. It doesn’t necessarily make sense for some properties (e.g., birthDate), but it’s still allowed. In JSON-LD: "sameAs": ["/foo", "/bar"], In Microdata: <link itemprop="sameAs" href="/foo" /> <link itemprop="sameAs" href="/bar" /> In RDFa: <link property="sameAs" href="/foo" /> <link property="sameAs" href="/bar" /> This doesn’t necessarily mean that Google (or any other consumer) supports this for every property, too. So when Google explicitly mentions this in their documentation, you can be sure that the respective search result feature works with multiple values.
Q: Can Schema.org Expected Types be collections, too? A Schema.org object of type Person can have a sameAs property of type URL. According to Google's structured data site, the sameAs property can be a single item or an array. The docs on Schema.org do not mention whether sameAs can be a single item or an array. Is this just Google deviating from Schema.org? Or is it the case that all properties in Schema.org can be single items or arrays? A: Every Schema.org property can have multiple values. It doesn’t necessarily make sense for some properties (e.g., birthDate), but it’s still allowed. In JSON-LD: "sameAs": ["/foo", "/bar"], In Microdata: <link itemprop="sameAs" href="/foo" /> <link itemprop="sameAs" href="/bar" /> In RDFa: <link property="sameAs" href="/foo" /> <link property="sameAs" href="/bar" /> This doesn’t necessarily mean that Google (or any other consumer) supports this for every property, too. So when Google explicitly mentions this in their documentation, you can be sure that the respective search result feature works with multiple values.
stackoverflow
{ "language": "en", "length": 165, "provenance": "stackexchange_0000F.jsonl.gz:859818", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526548" }
4d560a63b5e90af374532561e8b8c69d728842f4
Stackoverflow Stackexchange Q: Xmobar not visible when using with Xmonad Today I've started with Xmonad and cannot get Xmobar to be visible on top of layouts. In my .xmobarrc I have this code: ... , position = TopW L 100 , lowerOnStart = True , hideOnStart = False , allDesktops = True , overrideRedirect = True , pickBroadest = False , persistent = True ... And this is my xmonad.hs: import XMonad import XMonad.Hooks.DynamicLog import XMonad.Hooks.ManageDocks import XMonad.Util.Run(spawnPipe) import System.IO main = do xmproc <- spawnPipe "xmobar" xmonad $ defaultConfig { manageHook = manageDocks <+> manageHook defaultConfig , layoutHook = avoidStruts $ layoutHook defaultConfig , logHook = dynamicLogWithPP xmobarPP { ppOutput = hPutStrLn xmproc , ppTitle = xmobarColor "green" "" . shorten 50 } , terminal = "urxvt" , modMask = mod4Mask } Xmobar is running with Xmonad but it's not visible. How can I solve this? I need Xmobar to always be visible at the top of the monitor. A: Solution found at https://unix.stackexchange.com/questions/288037/ I added handleEventHook = handleEventHook defaultConfig <+> docksEventHook and now Xmobar is always visible.
Q: Xmobar not visible when using with Xmonad Today I've started with Xmonad and cannot get Xmobar to be visible on top of layouts. In my .xmobarrc I have this code: ... , position = TopW L 100 , lowerOnStart = True , hideOnStart = False , allDesktops = True , overrideRedirect = True , pickBroadest = False , persistent = True ... And this is my xmonad.hs: import XMonad import XMonad.Hooks.DynamicLog import XMonad.Hooks.ManageDocks import XMonad.Util.Run(spawnPipe) import System.IO main = do xmproc <- spawnPipe "xmobar" xmonad $ defaultConfig { manageHook = manageDocks <+> manageHook defaultConfig , layoutHook = avoidStruts $ layoutHook defaultConfig , logHook = dynamicLogWithPP xmobarPP { ppOutput = hPutStrLn xmproc , ppTitle = xmobarColor "green" "" . shorten 50 } , terminal = "urxvt" , modMask = mod4Mask } Xmobar is running with Xmonad but it's not visible. How can I solve this? I need Xmobar to always be visible at the top of the monitor. A: Solution found at https://unix.stackexchange.com/questions/288037/ I added handleEventHook = handleEventHook defaultConfig <+> docksEventHook and now Xmobar is always visible.
stackoverflow
{ "language": "en", "length": 210, "provenance": "stackexchange_0000F.jsonl.gz:859821", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526555" }
928f32fff972f718cc4c5e44df162b3db9796b7a
Stackoverflow Stackexchange Q: JPA/Hibernate native query to get and entity and eagerly fetch its associations not working Hibernate 5.2 docs says: It is possible to eagerly join the Phone and the Person entities to avoid the possible extra roundtrip for initializing the many-to-one association. Example 475. JPA native query selecting entities with joined many-to-one association List<Phone> phones = entityManager.createNativeQuery( "SELECT * " + "FROM Phone ph " + "JOIN Person pr ON ph.person_id = pr.id", Phone.class ) .getResultList(); for(Phone phone : phones) { Person person = phone.getPerson(); } I'm running an example similar to this one. My query is just as simple as the above. But, when I do phone.getPerson() another query is sent to database to retrieve Person. I get no duplicate alias error nor column not found error. By running myself the query generated by Hibernate I can check that all the columns needed to fill both Entities are present. Also tried the Hibernate alternative of the query. It didn't work. Besides .addEntity() and .addJoin() are deprecated (though still in the manual examples).
Q: JPA/Hibernate native query to get and entity and eagerly fetch its associations not working Hibernate 5.2 docs says: It is possible to eagerly join the Phone and the Person entities to avoid the possible extra roundtrip for initializing the many-to-one association. Example 475. JPA native query selecting entities with joined many-to-one association List<Phone> phones = entityManager.createNativeQuery( "SELECT * " + "FROM Phone ph " + "JOIN Person pr ON ph.person_id = pr.id", Phone.class ) .getResultList(); for(Phone phone : phones) { Person person = phone.getPerson(); } I'm running an example similar to this one. My query is just as simple as the above. But, when I do phone.getPerson() another query is sent to database to retrieve Person. I get no duplicate alias error nor column not found error. By running myself the query generated by Hibernate I can check that all the columns needed to fill both Entities are present. Also tried the Hibernate alternative of the query. It didn't work. Besides .addEntity() and .addJoin() are deprecated (though still in the manual examples).
stackoverflow
{ "language": "en", "length": 173, "provenance": "stackexchange_0000F.jsonl.gz:859825", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526568" }
8bc6f19f3b8c10b5fbe64785373a9a0a93d2092d
Stackoverflow Stackexchange Q: angular 4 style background image when using ngFor I have a problem with the background image URL. I have an array of image URLs; how can I use them as the background image URL? <div class="col-lg-3 col-md-3 col-sm-6" *ngFor="let course of courses"><a> </a><div class="box"><a> <div class="box-gray aligncenter" [style.backgroundColor]="course.imageUrl" > </div> </a><div class="box-bottom"><a > </a><a >{{course.name}}</a> </div> </div> </div> A: Refer to this one: Angular2 dynamic background images // inappropriate style.backgroundColor [style.backgroundColor]="course.imageUrl" // style.backgroundImage [style.backgroundImage]="'url('+ course.imageUrl +')'"
Q: angular 4 style background image when using ngFor I have a problem with the background image URL. I have an array of image URLs; how can I use them as the background image URL? <div class="col-lg-3 col-md-3 col-sm-6" *ngFor="let course of courses"><a> </a><div class="box"><a> <div class="box-gray aligncenter" [style.backgroundColor]="course.imageUrl" > </div> </a><div class="box-bottom"><a > </a><a >{{course.name}}</a> </div> </div> </div> A: Refer to this one: Angular2 dynamic background images // inappropriate style.backgroundColor [style.backgroundColor]="course.imageUrl" // style.backgroundImage [style.backgroundImage]="'url('+ course.imageUrl +')'" A: You can do it by putting the URL path in a single variable, for example bgImageVariable="www.domain.com/path/img.jpg"; and a second way: [ngStyle]="{'background-image': 'url(' + bgImageVariable + ')'}" A: Thank you for your answers; the correct code is [ngStyle]="{'background-image':'url(' + course.imageUrl + ')'}"> A: You should use background instead of backgroundColor [style.background]="'url('+course.imageUrl+')'"
stackoverflow
{ "language": "en", "length": 122, "provenance": "stackexchange_0000F.jsonl.gz:859865", "question_score": "21", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526691" }
973a1b3b9b7c5aaebfffcb1d9512e54e022208e5
Stackoverflow Stackexchange Q: Error-page issue with class inherited from RuntimeException In my JSF project (with the PrimeFaces framework), I defined in web.xml an error-page to display when java.lang.Exception is thrown. <error-page> <exception-type>java.lang.Exception</exception-type> <location>/erreur.xhtml</location> </error-page> That works fine when a RuntimeException is thrown (erreur.xhtml is displayed). I also created a class (called TechnicalException) inherited from RuntimeException. When a TechnicalException is thrown, I can't explain why the error page doesn't display. The same happens when I specify "TechnicalException" in the "exception-type" tag of web.xml. When the TechnicalException is thrown, the request keeps processing (the favicon of the tab stays in processing mode) until the session times out. Do you have any idea about this behaviour? A: I might have an idea: if you override the getCause() method it can result in a loop. Check your getCause() method and avoid "return this;" Julien
Q: Error-page issue with class inherited from RuntimeException In my JSF project (with the PrimeFaces framework), I defined in web.xml an error-page to display when java.lang.Exception is thrown. <error-page> <exception-type>java.lang.Exception</exception-type> <location>/erreur.xhtml</location> </error-page> That works fine when a RuntimeException is thrown (erreur.xhtml is displayed). I also created a class (called TechnicalException) inherited from RuntimeException. When a TechnicalException is thrown, I can't explain why the error page doesn't display. The same happens when I specify "TechnicalException" in the "exception-type" tag of web.xml. When the TechnicalException is thrown, the request keeps processing (the favicon of the tab stays in processing mode) until the session times out. Do you have any idea about this behaviour? A: I might have an idea: if you override the getCause() method it can result in a loop. Check your getCause() method and avoid "return this;" Julien
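A sketch of a TechnicalException written to avoid that trap (the class shape is an assumption, since the original isn't shown): pass the cause to the superclass constructor and never override getCause().

public class TechnicalException extends RuntimeException {

    public TechnicalException(String message) {
        super(message);
    }

    public TechnicalException(String message, Throwable cause) {
        super(message, cause); // let RuntimeException manage the cause chain
    }

    // Do NOT override getCause() with something like "return this;" --
    // the container walks the cause chain to pick an error page and can loop forever.
}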
stackoverflow
{ "language": "en", "length": 132, "provenance": "stackexchange_0000F.jsonl.gz:859869", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526705" }
06ce3be797bc918c68d1559841b1922aec443424
Stackoverflow Stackexchange Q: CasperJS Loading resource failed with status=fail (HTTP 0) Every time my script tries to open the "Create Job Posting" form of a site that I'm trying to crawl, the logs always end up like this: [info] [phantom] Step _step 9/18 http://www.samplesite.com/jobs/edit (HTTP 200) [info] [phantom] Step _step 9/18: done in 12780ms. [debug] [phantom] Navigation requested: url=about:blank, type=Other, willNavigate=true, isMainFrame=false [debug] [phantom] Navigation requested: url=http://www.samplesite.com/jobs/edit, type=Other, willNavigate=true, isMainFrame=false [warning] [phantom] Loading resource failed with status=fail: http://www.samplesite.com/jobs/edit [warning] [phantom] Casper.waitFor() timeout [info] [phantom] Step _step 10/18 http://www.samplesite.com/jobs/edit (HTTP 0) [info] [phantom] Step _step 10/18: done in 22814ms. [warning] [phantom] Casper.waitFor() timeout I tried to open the site manually, and it seems to be perfectly fine. I checked the error screenshot and the HTML of the last step with CasperJS, and it always looks the same. I did some research for a workaround on this issue, and some suggested adding --ignore-ssl-errors=true or --ssl-protocol=any to my command, but these didn't work on my end. In my JS script I didn't add any special methods; I only used plain casper.click() or casper.open() to open the page. I'm using casperjs 1.0.0 and phantomjs 2.1.1
Q: CasperJS Loading resource failed with status=fail (HTTP 0) Every time my script tries to open the "Create Job Posting" form of a site that I'm trying to crawl, the logs always end up like this: [info] [phantom] Step _step 9/18 http://www.samplesite.com/jobs/edit (HTTP 200) [info] [phantom] Step _step 9/18: done in 12780ms. [debug] [phantom] Navigation requested: url=about:blank, type=Other, willNavigate=true, isMainFrame=false [debug] [phantom] Navigation requested: url=http://www.samplesite.com/jobs/edit, type=Other, willNavigate=true, isMainFrame=false [warning] [phantom] Loading resource failed with status=fail: http://www.samplesite.com/jobs/edit [warning] [phantom] Casper.waitFor() timeout [info] [phantom] Step _step 10/18 http://www.samplesite.com/jobs/edit (HTTP 0) [info] [phantom] Step _step 10/18: done in 22814ms. [warning] [phantom] Casper.waitFor() timeout I tried to open the site manually, and it seems to be perfectly fine. I checked the error screenshot and the HTML of the last step with CasperJS, and it always looks the same. I did some research for a workaround on this issue, and some suggested adding --ignore-ssl-errors=true or --ssl-protocol=any to my command, but these didn't work on my end. In my JS script I didn't add any special methods; I only used plain casper.click() or casper.open() to open the page. I'm using casperjs 1.0.0 and phantomjs 2.1.1
stackoverflow
{ "language": "en", "length": 199, "provenance": "stackexchange_0000F.jsonl.gz:859883", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526754" }
1ce8fa97707ede1daee9f2632b6a9563720776aa
Stackoverflow Stackexchange Q: How to perform tf.image.per_image_standardization on a batch of images in tensorflow I would like to know how to perform image whitening on a batch of images. According to the documentation at https://www.tensorflow.org/api_docs/python/tf/image/per_image_standardization, tf.image.per_image_standardization takes as input a 3D tensor, that is, a single image of shape [height, width, channels]. Is this a missing feature, or is there a different method? Any help is much appreciated. A: This is how to perform this operation on a batch of images. tf.map_fn(lambda frame: tf.image.per_image_standardization(frame), frames)
Q: How to perform tf.image.per_image_standardization on a batch of images in tensorflow I would like to know how to perform image whitening on a batch of images. According to the documentation at https://www.tensorflow.org/api_docs/python/tf/image/per_image_standardization, tf.image.per_image_standardization takes as input a 3D tensor, that is, a single image of shape [height, width, channels]. Is this a missing feature, or is there a different method? Any help is much appreciated. A: This is how to perform this operation on a batch of images. tf.map_fn(lambda frame: tf.image.per_image_standardization(frame), frames)
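Spelled out a little more fully (TF 1.x graph style, matching the era of the question; the shape values are just examples):

import tensorflow as tf

# A batch of images: [batch, height, width, channels]
frames = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])

# map_fn applies the per-image op across the leading (batch) dimension
standardized = tf.map_fn(
    lambda frame: tf.image.per_image_standardization(frame),
    frames)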
stackoverflow
{ "language": "en", "length": 86, "provenance": "stackexchange_0000F.jsonl.gz:859887", "question_score": "14", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526763" }
69cb89f2a003e13464ff645b0600f9dd4dfbf15d
Stackoverflow Stackexchange Q: Connect to TfsTeamProjectCollection using WindowsCredentials I want to connect to a TfsTeamProjectCollection using the same credentials I use to log in to Windows. Is this even possible? I am currently connecting with alternative credentials using this code: NetworkCredential credential = new NetworkCredential(this._username, this._password); VssBasicCredential basicCred = new VssBasicCredential(credential); try { _tfsDataConnection = new TfsTeamProjectCollection(new Uri(this._tfsLink), basicCred); // VssData Part Thread.CurrentPrincipal = new WindowsPrincipal(WindowsIdentity.GetCurrent()); var url = new Uri(_tfsLink); VssCredentials vsc = new VssCredentials(new Microsoft.VisualStudio.Services.Common.WindowsCredential( new NetworkCredential(this._username, this._password))); VssConnection connection = new VssConnection(url, vsc); _vssDataConnection = connection.GetClient<BuildHttpClient>(); } I will need to get builds and projects from that server. This is what I tried, but I get an error as if I am not authorized: _tfsDataConnection = new TfsTeamProjectCollection(new Uri(this._tfsLink)); try { VssCredentials vsc = new VssCredentials(new Microsoft.VisualStudio.Services.Common.WindowsCredential(CredentialCache.DefaultNetworkCredentials)); VssConnection connection = new VssConnection(new Uri(_tfsLink), vsc); _vssDataConnection = connection.GetClient<BuildHttpClient>(); } A: Do not pass any explicit credential: the classic client SDK will use the current user. _tfsDataConnection = new TfsTeamProjectCollection(new Uri(this._tfsLink)); _tfsDataConnection.Authenticate(); The REST SDK is similar; you should use the default constructor. VssConnection connection = new VssConnection(new Uri(_tfsLink), new VssCredentials()); _vssDataConnection = connection.GetClient<BuildHttpClient>();
Q: Connect to TfsTeamProjectCollection using WindowsCredentials I want to connect to a TfsTeamProjectCollection using the same credentials I use to log in to Windows. Is this even possible? I am currently connecting with alternative credentials using this code: NetworkCredential credential = new NetworkCredential(this._username, this._password); VssBasicCredential basicCred = new VssBasicCredential(credential); try { _tfsDataConnection = new TfsTeamProjectCollection(new Uri(this._tfsLink), basicCred); // VssData Part Thread.CurrentPrincipal = new WindowsPrincipal(WindowsIdentity.GetCurrent()); var url = new Uri(_tfsLink); VssCredentials vsc = new VssCredentials(new Microsoft.VisualStudio.Services.Common.WindowsCredential( new NetworkCredential(this._username, this._password))); VssConnection connection = new VssConnection(url, vsc); _vssDataConnection = connection.GetClient<BuildHttpClient>(); } I will need to get builds and projects from that server. This is what I tried, but I get an error as if I am not authorized: _tfsDataConnection = new TfsTeamProjectCollection(new Uri(this._tfsLink)); try { VssCredentials vsc = new VssCredentials(new Microsoft.VisualStudio.Services.Common.WindowsCredential(CredentialCache.DefaultNetworkCredentials)); VssConnection connection = new VssConnection(new Uri(_tfsLink), vsc); _vssDataConnection = connection.GetClient<BuildHttpClient>(); } A: Do not pass any explicit credential: the classic client SDK will use the current user. _tfsDataConnection = new TfsTeamProjectCollection(new Uri(this._tfsLink)); _tfsDataConnection.Authenticate(); The REST SDK is similar; you should use the default constructor. VssConnection connection = new VssConnection(new Uri(_tfsLink), new VssCredentials()); _vssDataConnection = connection.GetClient<BuildHttpClient>();
stackoverflow
{ "language": "en", "length": 181, "provenance": "stackexchange_0000F.jsonl.gz:859904", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526798" }
9defba501af92e7f8586595b05e30b85f29a2b99
Stackoverflow Stackexchange Q: How do you test the Cmd of an update function? I want to write a test that says, "If update is called with a GetData msg, it returns a (_, httpCmd)." I'm not sure how to write this test. I know how to get the response as a (model, cmd), but I don't know how to parse the cmd to see what's inside it. How do people test the Cmd response of their update function? A: As of right now, Cmds are opaque - you can't see inside them. There is elm-testable which you could use, but it needs some preparation from your side. There is also a rewrite underway that will, when done, allow you to keep your original code and test it directly.
Q: How do you test the Cmd of an update function? I want to write a test that says, "If update is called with a GetData msg, it returns a (_, httpCmd)." I'm not sure how to write this test. I know how to get the response as a (model, cmd), but I don't know how to parse the cmd to see what's inside it. How do people test the Cmd response of their update function? A: As of right now, Cmds are opaque - you can't see inside them. There is elm-testable which you could use, but it needs some preparation from your side. There is also a rewrite underway that will, when done, allow you to keep your original code and test it directly.
stackoverflow
{ "language": "en", "length": 126, "provenance": "stackexchange_0000F.jsonl.gz:859906", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526805" }
5dffc0c7c524f14820ef075af2950b8e4ddeea3f
Stackoverflow Stackexchange Q: Gradle project sync failed in Android Studio 2.3.3 I recently updated my Android Studio from 2.0 to 2.3.3, but when I imported my old projects it started showing Gradle project sync failed. Basic functionality will not work properly. And the Messages pane shows: Unknown host 'services.gradle.org'. You may need to adjust the proxy settings in Gradle. How can I solve this? A: Probably this is due to a broken download of Gradle; I had this problem too: + Download the latest gradle zip from: https://services.gradle.org/distributions + Extract the folder and replace the existing one in android_studio(where you installed it)/gradle/the_existing_gradle_folder Hope this helps. Thanks!
Q: Gradle project sync failed in Android Studio 2.3.3 I recently updated my Android Studio from 2.0 to 2.3.3, but when I imported my old projects it started showing Gradle project sync failed. Basic functionality will not work properly. And the Messages pane shows: Unknown host 'services.gradle.org'. You may need to adjust the proxy settings in Gradle. How can I solve this? A: Probably this is due to a broken download of Gradle; I had this problem too: + Download the latest gradle zip from: https://services.gradle.org/distributions + Extract the folder and replace the existing one in android_studio(where you installed it)/gradle/the_existing_gradle_folder Hope this helps. Thanks! A: Reset all configurations and try again. Go to File > Settings > Appearance & Behavior > System Settings > HTTP Proxy [under IDE Settings] and enable the following option: Auto-detect proxy settings. You can also clean the project via Build -> Clean Project and File -> Invalidate Caches/Restart. A wrong Gradle location can also cause this problem. Check it at: File -> Settings -> Build, Execution, Deployment -> Build Tools -> Gradle. Under the project-level settings, find the gradle directory. The Gradle directory is usually C:\Users\username\.gradle on Windows... A: These instructions apply to a Linux system. If the issue came after your recent update, then just point Android Studio to the location of gradle it was using before. To do this, go to File -> Settings -> Build, Execution, Deployment -> Gradle. In the project-level settings, select Use local gradle distribution and select the folder where gradle was installed. Be sure to select show hidden folders (the very last icon) because the gradle folder is hidden. If you want this to be a global setting, do the same for the global Gradle settings. Set the gradle folder there too. Click apply, and you are done!
stackoverflow
{ "language": "en", "length": 287, "provenance": "stackexchange_0000F.jsonl.gz:859920", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526833" }
1235c85064b603380cbd2c3c9f9935fe3c5623cc
Stackoverflow Stackexchange Q: Swagger UI can't load API docs over http/2 I'm trying to run Swashbuckle on an application hosted on Windows Server 2016 running HTTP/2 (the successor to SPDY). When I access the URL for the Swagger API, I get ERR_SPDY_PROTOCOL_ERROR. I can access the UI page fine, but it's empty because it can't retrieve the API docs. Has anyone run into this? Any solution? (This is with version 5.2.1.)
Q: Swagger UI can't load API docs over http/2 I'm trying to run Swashbuckle on an application hosted on Windows Server 2016 running HTTP/2 (the successor to SPDY). When I access the URL for the Swagger API, I get ERR_SPDY_PROTOCOL_ERROR. I can access the UI page fine, but it's empty because it can't retrieve the API docs. Has anyone run into this? Any solution? (This is with version 5.2.1.)
stackoverflow
{ "language": "en", "length": 67, "provenance": "stackexchange_0000F.jsonl.gz:859932", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526879" }
17538fd2cb240dc46e79dbbac8970123318633f0
Stackoverflow Stackexchange Q: segmentation fault SIGSEGV dependent on initialisation method I wrote a small function foo that changes a string. When I use the function, sometimes I receive a SIGSEGV fault. This is dependent on how the string is initialized. In the calling function main, a string is initialized through memory allocation and calling strcpy. I can change that string correctly. The other string (TestString2) is initialized when I declared the variable. I cannot trim this string; instead I get the SIGSEGV fault. Why is this? #include<stdio.h> #include<stdlib.h> #include<string.h> void foo(char *Expr) { *Expr = 'a'; } int main() { char *TestString1; char *TestString2 = "test "; TestString1 = malloc (sizeof(char) * 100); strcpy(TestString1, "test "); foo(TestString1); foo(TestString2); return 0; } A: In the case of TestString2, you set it to the address of a string constant. These constants cannot be modified, and typically reside in a read-only section of memory. Because of this, you invoke undefined behavior, which in this case manifests as a crash. The case of TestString1 is valid because it points to dynamically allocated memory which you are allowed to change.
Q: segmentation fault SIGSEGV dependent on initialisation method I wrote a small function foo that changes a string. When I use the function, sometimes I receive a SIGSEGV fault. This is dependent on how the string is initialized. In the calling function main, a string is initialized through memory allocation and calling strcpy. I can change that string correctly. The other string (TestString2) is initialized when I declared the variable. I cannot trim this string; instead I get the SIGSEGV fault. Why is this? #include<stdio.h> #include<stdlib.h> #include<string.h> void foo(char *Expr) { *Expr = 'a'; } int main() { char *TestString1; char *TestString2 = "test "; TestString1 = malloc (sizeof(char) * 100); strcpy(TestString1, "test "); foo(TestString1); foo(TestString2); return 0; } A: In the case of TestString2, you set it to the address of a string constant. These constants cannot be modified, and typically reside in a read-only section of memory. Because of this, you invoke undefined behavior, which in this case manifests as a crash. The case of TestString1 is valid because it points to dynamically allocated memory which you are allowed to change.
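A sketch of the two standard fixes: declare the pointer const so the compiler rejects the write, or use array initialization, which copies the literal into writable storage (variable names here are illustrative):

#include <stdio.h>

void foo(char *Expr)
{
    *Expr = 'a';
}

int main(void)
{
    const char *ro = "test ";  /* points at the read-only literal; writing through it is UB */
    char rw[] = "test ";       /* array initialization copies the literal, so it is writable */

    foo(rw);                   /* fine: modifies the local copy */
    /* foo(ro); now draws a compiler diagnostic instead of crashing at run time */
    printf("%s\n", rw);
    return 0;
}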
stackoverflow
{ "language": "en", "length": 183, "provenance": "stackexchange_0000F.jsonl.gz:859966", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44526988" }
f0c31854ab8190061e04e2dfe1d2d2217bdaaa53
Stackoverflow Stackexchange Q: Youtube subtitles not showing for embedded video - even the button is missing For all videos, the subtitles are missing (even the button) when you embed the video in an iframe (directly or via the API). The Google example: https://support.google.com/youtube/answer/171780?hl=en If you click "Watch on YouTube" there is a subtitles button, which is missing in the embedded view. I tried the parameters cc_load_policy, cc_lang_pref and other stuff, but nothing works... Please help. A: My guess is that something changed between yesterday and today, as I just posted the following question today: youtube-iframe-api closed captioning troubles Someone also pointed out code got changed YESTERDAY on the IFrame Player API. https://developers.google.com/youtube/player_parameters#Revision_History
Q: Youtube subtitles not showing for embedded video - even the button is missing For all videos, the subtitles are missing (even the button) when you embed the video in an iframe (directly or via the API). The Google example: https://support.google.com/youtube/answer/171780?hl=en If you click "Watch on YouTube" there is a subtitles button, which is missing in the embedded view. I tried the parameters cc_load_policy, cc_lang_pref and other stuff, but nothing works... Please help. A: My guess is that something changed between yesterday and today, as I just posted the following question today: youtube-iframe-api closed captioning troubles Someone also pointed out code got changed YESTERDAY on the IFrame Player API. https://developers.google.com/youtube/player_parameters#Revision_History A: Experiencing the same problem with our embedded videos on all browser platforms. The problem was first noticed on 6/13/17 on our end. The CC option is missing in all embedded videos despite the .srt files being available and having worked as expected previously. I hope YouTube gets this straightened out quickly. We are an international company of over 100,000 employees and we have already received feedback from hearing-impaired users that it has rendered our videos unusable. A: We're facing the same issue. On further inspection it seems the captions module is no longer provided to embeds. A: It seems it was a temporary bug at YouTube... the subtitles have been working again in embedded videos since 15.06.2017...
stackoverflow
{ "language": "en", "length": 222, "provenance": "stackexchange_0000F.jsonl.gz:859976", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527024" }
a24fbd3eb2adf3415a39a72c657d5f8f57a572be
Stackoverflow Stackexchange Q: Coloured Bullets NSTextList I am using an NSTextList in a text editor I am building. I am trying to change the colour of the bullets in the NSTextList while still keeping the built-in functionality of an NSTextList. So far I have tried overriding the shouldChangeText function of the NSTextView to intercept when a bullet is being added and tried to replace it with my own coloured bullet. override func shouldChangeText(in affectedCharRange: NSRange, replacementString: String?) -> Bool { if replacementString == "\t\u{2022}\t" { let redBullet = NSAttributedString(string: "\t\u{2022}\t", attributes: [NSForegroundColorAttributeName: NSColor.red]) insertText(redBullet, replacementRange: affectedCharRange) return false } return super.shouldChangeText(in: affectedCharRange, replacementString: replacementString) } However, this does not keep the functionality of an NSTextList and does not distinguish whether the bullet is already coloured or not. Any advice on how to do this would be greatly appreciated.
Q: Coloured Bullets NSTextList I am using an NSTextList in a text editor I am building. I am trying to change the colour of the bullets in the NSTextList while still keeping the built-in functionality of an NSTextList. So far I have tried overriding the shouldChangeText function of the NSTextView to intercept when a bullet is being added and tried to replace it with my own coloured bullet. override func shouldChangeText(in affectedCharRange: NSRange, replacementString: String?) -> Bool { if replacementString == "\t\u{2022}\t" { let redBullet = NSAttributedString(string: "\t\u{2022}\t", attributes: [NSForegroundColorAttributeName: NSColor.red]) insertText(redBullet, replacementRange: affectedCharRange) return false } return super.shouldChangeText(in: affectedCharRange, replacementString: replacementString) } However, this does not keep the functionality of an NSTextList and does not distinguish whether the bullet is already coloured or not. Any advice on how to do this would be greatly appreciated.
stackoverflow
{ "language": "en", "length": 128, "provenance": "stackexchange_0000F.jsonl.gz:860002", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527109" }
eaefcbd5c14eb75dd82da85b7de83f979a93acc9
Stackoverflow Stackexchange Q: Deploying an Azure Function with Visual Studio Team Services build and release server We have a large solution with 50+ projects. Inside that solution we have an Azure Functions project using the new Visual Studio 2017 preview, meaning it no longer uses csx files, but dlls. We cannot use CI, but would instead like to have the entire solution build on committing to a certain branch; we have this portion set up. What I'm wondering is how you package the Azure Functions project and then manually release it using VSTS Release. So far I've found nothing compatible with the new Azure Functions style. A: I've set up our function app project to build and deploy with VSTS - we're doing an automatic release rather than a manual trigger, but this should still point you in the right direction. The steps to set this up with VSTS are detailed in this blog post. Also, here's a link to the original discussion issue on the Azure Functions tooling GitHub.
Q: Deploying an Azure Function with Visual Studio Team Services build and release server We have a large solution with 50+ projects. Inside that solution we have an Azure Functions project using the new Visual Studio 2017 preview, meaning it no longer uses csx files, but dlls. We cannot use CI, but would instead like to have the entire solution build on committing to a certain branch; we have this portion set up. What I'm wondering is how you package the Azure Functions project and then manually release it using VSTS Release. So far I've found nothing compatible with the new Azure Functions style. A: I've set up our function app project to build and deploy with VSTS - we're doing an automatic release rather than a manual trigger, but this should still point you in the right direction. The steps to set this up with VSTS are detailed in this blog post. Also, here's a link to the original discussion issue on the Azure Functions tooling GitHub.
stackoverflow
{ "language": "en", "length": 172, "provenance": "stackexchange_0000F.jsonl.gz:860035", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527200" }
60c09d14b5fe922377901e9b23bcfa44f81ad818
Stackoverflow Stackexchange Q: CSS Pointer Events – Accept Drag, Reject Click tldr; I need an element to register drag and drop pointer events, but pass click and other pointer events to elements behind it. I am building a drag and drop photo upload feature in React using react-dropzone. I want the dropzone to be over the whole page, so if you drag a file onto any part of the page, you can drop it to upload the image. The dropzone is transparent when no file is dragged over it, so I need clicks to register with elements behind it. To accomplish this, I gave the dropzone component the following style: position: fixed; top: 0; left: 0; right: 0; bottom: 0; pointer-events: none; However, pointer-events: none; causes the dropzone to not recognize the necessary drag and drop events. Is there any way to recognize these specific pointer events, while passing others (like click) to elements behind the dropzone? A: Try using the draggable attribute. It worked for me: <p draggable="true"> jkjfj </p>
Q: CSS Pointer Events – Accept Drag, Reject Click tldr; I need an element to register drag and drop pointer events, but pass click and other pointer events to elements behind it. I am building a drag and drop photo upload feature in React using react-dropzone. I want the dropzone to be over the whole page, so if you drag a file onto any part of the page, you can drop it to upload the image. The dropzone is transparent when no file is dragged over it, so I need clicks to register with elements behind it. To accomplish this, I gave the dropzone component the following style: position: fixed; top: 0; left: 0; right: 0; bottom: 0; pointer-events: none; However, pointer-events: none; causes the dropzone to not recognize the necessary drag and drop events. Is there any way to recognize these specific pointer events, while passing others (like click) to elements behind the dropzone? A: Try using the draggable attribute. It worked for me: <p draggable="true"> jkjfj </p> A: UPDATED ANSWER: #dropzone{ position: fixed; top: 0; left: 0; width: 100%; height: 100%; z-index: 10; /* set this to make it sit on top of everything */ pointer-events: none; } .user-is-dragging #dropzone{ pointer-events: all !important; } //element declarations const dropzone = document.getElementById("dropzone"); const body = document.body; //timeout handle to help detect when the user is dragging something let dragHandle; // utility function to detect drag & drop support function dragDropSupported() { var div = document.createElement('div'); return ('draggable' in div) || ('ondragstart' in div && 'ondrop' in div); } function initDragDrop(){ //simply exit / do other stuff if drag & drop is not supported if(!dragDropSupported()){ console.warn("Drag & drop not supported"); return; } //add the user-is-dragging class, which enables pointer events for the drop event body.addEventListener("dragover", (e) => { body.classList.add("user-is-dragging"); clearTimeout(dragHandle); dragHandle = setTimeout(() => { body.classList.remove("user-is-dragging"); }, 200); }); //this is to prevent the browser from opening the dragged file(s) dropzone.addEventListener('dragover', (e) => { e.preventDefault(); }); dropzone.addEventListener('drop', (e) => { //prevent the browser from opening the dragged file(s) e.preventDefault(); //dragged files const files = e.dataTransfer.files; console.log(files); }) } A: I recently had a similar issue and managed to solve it by setting z-index for the dropzone to 1, while setting z-index for the other elements to, say, 2, with position relative. A: I fixed this error by setting pointer-events to none on .file-drop but auto on .file-drop > .file-drop-target.file-drop-dragging-over-frame
stackoverflow
{ "language": "en", "length": 394, "provenance": "stackexchange_0000F.jsonl.gz:860051", "question_score": "26", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527246" }
8b47c62fd9c96e050d6fdac1b55e4b08336f29d5
Stackoverflow Stackexchange Q: Reduce list based on element substrings I'm looking for the most efficient way to reduce a given list based on substrings already in the list. For example mylist = ['abcd','abcde','abcdef','qrs','qrst','qrstu'] would be reduced to: mylist = ['abcd','qrs'] because both 'abcd' and 'qrs' are the shortest prefixes of other elements in that list. I was able to do this with about 30 lines of code, but I suspect there is a crafty one-liner out there... A: This seems to be working (but is not so efficient, I suppose): def reduce_prefixes(strings): sorted_strings = sorted(strings) return [element for index, element in enumerate(sorted_strings) if all(not previous.startswith(element) and not element.startswith(previous) for previous in sorted_strings[:index])] tests: >>>reduce_prefixes(['abcd', 'abcde', 'abcdef', 'qrs', 'qrst', 'qrstu']) ['abcd', 'qrs'] >>>reduce_prefixes(['abcd', 'abcde', 'abcdef', 'qrs', 'qrst', 'qrstu', 'gabcd', 'gab', 'ab']) ['ab', 'gab', 'qrs']
Q: Reduce list based on element substrings I'm looking for the most efficient way to reduce a given list based on substrings already in the list. For example mylist = ['abcd','abcde','abcdef','qrs','qrst','qrstu'] would be reduced to: mylist = ['abcd','qrs'] because both 'abcd' and 'qrs' are the shortest prefixes of other elements in that list. I was able to do this with about 30 lines of code, but I suspect there is a crafty one-liner out there... A: This seems to be working (but is not so efficient, I suppose): def reduce_prefixes(strings): sorted_strings = sorted(strings) return [element for index, element in enumerate(sorted_strings) if all(not previous.startswith(element) and not element.startswith(previous) for previous in sorted_strings[:index])] tests: >>>reduce_prefixes(['abcd', 'abcde', 'abcdef', 'qrs', 'qrst', 'qrstu']) ['abcd', 'qrs'] >>>reduce_prefixes(['abcd', 'abcde', 'abcdef', 'qrs', 'qrst', 'qrstu', 'gabcd', 'gab', 'ab']) ['ab', 'gab', 'qrs'] A: One solution is to iterate over all the strings, split them based on whether they have different characters, and apply that function recursively. def reduce_substrings(strings): return list(_reduce_substrings(map(iter, strings))) def _reduce_substrings(strings): # A dictionary of characters to a list of strings that begin with that character nexts = {} for string in strings: try: nexts.setdefault(next(string), []).append(string) except StopIteration: # Reached the end of this string. It is the only shortest substring. yield '' return for next_char, next_strings in nexts.items(): for next_substrings in _reduce_substrings(next_strings): yield next_char + next_substrings This splits the strings into a dictionary keyed by their next character and recursively finds the shortest prefix within each group. Of course, because of the recursive nature of this function, a one-liner wouldn't be possible as efficiently. A: Probably not the most efficient, but at least short: mylist = ['abcd','abcde','abcdef','qrs','qrst','qrstu'] outlist = [] for l in mylist: if any(o.startswith(l) for o in outlist): # l is a prefix of some elements in outlist, so it replaces them outlist = [ o for o in outlist if not o.startswith(l) ] + [ l ] if not any(l.startswith(o) for o in outlist): # l has no prefix in outlist yet, so it becomes a prefix candidate outlist.append(l) print(outlist) A: Try this one: import re mylist = ['abcd','abcde','abcdef','qrs','qrst','qrstu'] new_list=[] for i in mylist: if re.match("^abcd$",i): new_list.append(i) elif re.match("^qrs$",i): new_list.append(i) print(new_list) #['abcd', 'qrs']
stackoverflow
{ "language": "en", "length": 369, "provenance": "stackexchange_0000F.jsonl.gz:860062", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527272" }
fc98a8a58f6ab0329c5a9292ef041f5bedcb0015
Stackoverflow Stackexchange Q: How to include an internal reference in a code block? In my Sphinx .rst document I have a code block containing a tree view of the structure of my product using the UNIX tree command: |── parent |   |── child |   |── grandchild It's in a code block so that Sphinx preserves the whitespace. I want readers to be able to click on each node to follow an internal hyperlink to the part of the document that describes that node. However, adding a :ref: inside the code block doesn't work (see below). Does anyone know how to achieve this? This doesn't work: .. _parent: Parent ------ Blah blah .. _child: Child ----- Blah blah .. _grandchild: Grandchild ---------- Blah blah Then...: |── :ref:`parent` | |── :ref:`child` | |── :ref:`grandchild` A: You can use the parsed-literal directive: .. parsed-literal:: |── :ref:`parent` | |── :ref:`child` | |── :ref:`grandchild` This works, but there are warning messages saying "WARNING: Inline substitution_reference start-string without end-string." The vertical bars are interpreted as parts of substitution references. The warnings go away with some escaping: .. parsed-literal:: \|── :ref:`parent` | \|── :ref:`child` | \|── :ref:`grandchild`
Q: How to include an internal reference in a code block? In my Sphinx .rst document I have a code block containing a tree view of the structure of my product using the UNIX tree command: |── parent |   |── child |   |── grandchild It's in a code block so that Sphinx preserves the whitespace. I want readers to be able to click on each node to follow an internal hyperlink to the part of the document that describes that node. However, adding a :ref: inside the code block doesn't work (see below). Does anyone know how to achieve this? This doesn't work: .. _parent: Parent ------ Blah blah .. _child: Child ----- Blah blah .. _grandchild: Grandchild ---------- Blah blah Then...: |── :ref:`parent` | |── :ref:`child` | |── :ref:`grandchild` A: You can use the parsed-literal directive: .. parsed-literal:: |── :ref:`parent` | |── :ref:`child` | |── :ref:`grandchild` This works, but there are warning messages saying "WARNING: Inline substitution_reference start-string without end-string." The vertical bars are interpreted as parts of substitution references. The warnings go away with some escaping: .. parsed-literal:: \|── :ref:`parent` | \|── :ref:`child` | \|── :ref:`grandchild` A: .. code-block:: is for literal code and does not get parsed except for syntax highlighting. Instead you could use a CSS class my-special-class to apply styles to the tree, and write CSS styles similar to HTML's <pre> or <code>. You will also need to escape | as \| because reST tries to parse | as a substitution. reST: .. rst-class:: my-special-class \|── :ref:`parent` \| \|── :ref:`child` \| \|── :ref:`grandchild` CSS: .my-special-class { font-family: monospace; white-space: pre; }
stackoverflow
{ "language": "en", "length": 266, "provenance": "stackexchange_0000F.jsonl.gz:860100", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527391" }
c70c9d5fe49479a607a7ea17d6433e587cafa9e7
Stackoverflow Stackexchange Q: How to pass multiple custom metrics (eval_metric) in python xgboost? The following code is not working, where aucerr and aoeerr are custom evaluation metrics; it works with just one eval_metric, either aucerr or aoeerr: prtXGB.fit(trainData, targetVar, early_stopping_rounds=10, eval_metric= [aucerr, aoeerr], eval_set=[(valData, valTarget)]) However, the following code with built-in evaluation metrics is working: prtXGB.fit(trainData, targetVar, early_stopping_rounds=10, eval_metric= ['auc', 'logloss'], eval_set=[(valData, valTarget)]) Here are my custom functions: def aucerr(y_predicted, y_true): labels = y_true.get_label() auc1 = metrics.roc_auc_score(labels,y_predicted) return 'AUCerror', abs(1-auc1) def aoeerr(y_predicted, y_true): labels = y_true.get_label() actuals = sum(labels) predicted = sum(y_predicted) ae = actuals/predicted return 'AOEerror', abs(1-ae)
Q: How to pass multiple custom metrics (eval_metric) in python xgboost? The following code is not working, where aucerr and aoeerr are custom evaluation metrics; it works with just one eval_metric, either aucerr or aoeerr: prtXGB.fit(trainData, targetVar, early_stopping_rounds=10, eval_metric= [aucerr, aoeerr], eval_set=[(valData, valTarget)]) However, the following code with built-in evaluation metrics is working: prtXGB.fit(trainData, targetVar, early_stopping_rounds=10, eval_metric= ['auc', 'logloss'], eval_set=[(valData, valTarget)]) Here are my custom functions: def aucerr(y_predicted, y_true): labels = y_true.get_label() auc1 = metrics.roc_auc_score(labels,y_predicted) return 'AUCerror', abs(1-auc1) def aoeerr(y_predicted, y_true): labels = y_true.get_label() actuals = sum(labels) predicted = sum(y_predicted) ae = actuals/predicted return 'AOEerror', abs(1-ae)
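One possible workaround, sketched against the native xgboost API rather than the sklearn wrapper: fold both checks into a single custom feval. Note the assumptions here: params, dtrain_matrix and dval_matrix are placeholder objects, and whether feval may return a list of (name, value) tuples depends on your xgboost version, so verify this against the version you run.

import xgboost as xgb
from sklearn import metrics

def combined_eval(y_predicted, dtrain):
    # dtrain is the DMatrix being evaluated; get_label() returns its labels
    labels = dtrain.get_label()
    auc_err = abs(1 - metrics.roc_auc_score(labels, y_predicted))
    aoe_err = abs(1 - sum(labels) / sum(y_predicted))
    # Returning a list of (name, value) pairs is the assumption to verify;
    # if your version only accepts a single tuple, combine the two scores.
    return [('AUCerror', auc_err), ('AOEerror', aoe_err)]

booster = xgb.train(params, dtrain_matrix,
                    evals=[(dval_matrix, 'val')],
                    feval=combined_eval,
                    early_stopping_rounds=10)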
stackoverflow
{ "language": "en", "length": 97, "provenance": "stackexchange_0000F.jsonl.gz:860126", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527485" }
1562b5016eb38e3f3d8217f0071a238e61e93000
Stackoverflow Stackexchange Q: Consecutive Memory Allocation I used the following code to find the difference between the addresses of two consecutive blocks of memory in an array. Although the printed size of each item ('int' here) is 4, the difference between the addresses of two consecutive blocks containing that item turns out to be 1. Shouldn't the difference be 4? #include<iostream> #include<vector> #include<algorithm> using namespace std; int main() { int a[10]; for(int i=0;i<10;i++) cin>>a[i]; cout<<"size of 1 block = "<<sizeof(int)<<"\n"; for(int i=1;i<10;i++){ // cout<<"Difference between address of consecutive memory blocks\n"; cout<<" "<<&(a[i])- &(a[i-1])<<"\n"; } return 0; } Output size of 1 block = 4 1 1 1 1 1 1 1 1 1 A: It's because of pointer arithmetic. The type of &(a[i]) is int*, which points to a memory block that can store an int. If you step this pointer by one, it will point to the next memory block that can store another int. This is the reason why the difference between two int* is 1. You can print out what you are trying to achieve by casting the int* pointer type to the char* pointer type and then calculating the difference: cout << (reinterpret_cast<char*>(&a[i]) - reinterpret_cast<char*>(&a[i - 1]));
Q: Consecutive Memory Allocation I used the following code to find the difference between the addresses of two consecutive blocks of memory in an array. Although the printed size of each item ('int' here) is 4, the difference between the addresses of two consecutive blocks containing that item turns out to be 1. Shouldn't the difference be 4? #include<iostream> #include<vector> #include<algorithm> using namespace std; int main() { int a[10]; for(int i=0;i<10;i++) cin>>a[i]; cout<<"size of 1 block = "<<sizeof(int)<<"\n"; for(int i=1;i<10;i++){ // cout<<"Difference between address of consecutive memory blocks\n"; cout<<" "<<&(a[i])- &(a[i-1])<<"\n"; } return 0; } Output size of 1 block = 4 1 1 1 1 1 1 1 1 1 A: It's because of pointer arithmetic. The type of &(a[i]) is int*, which points to a memory block that can store an int. If you step this pointer by one, it will point to the next memory block that can store another int. This is the reason why the difference between two int* is 1. You can print out what you are trying to achieve by casting the int* pointer type to the char* pointer type and then calculating the difference: cout << (reinterpret_cast<char*>(&a[i]) - reinterpret_cast<char*>(&a[i - 1])); A: The "difference measure" is the number of ints, not chars – the difference between two pointers of type T* is the number of objects of type T between them. Note that if int* p = &a[k]; then p + (&(a[i])- &(a[i-1])) == p + 1; that is, adding the difference between two consecutive elements to p gives p + 1, which is exactly what you would expect. A: You can get the value you expected by casting to integers, such as unsigned long long: cout << " " << (unsigned long long)&a[i] - (unsigned long long)&a[i-1] << "\n"; Casting to unsigned int is enough on a 32-bit system. Without the cast, pointer arithmetic is performed, resulting in 1.
stackoverflow
{ "language": "en", "length": 314, "provenance": "stackexchange_0000F.jsonl.gz:860149", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527570" }
12e1e6ab3eab843b89bddb5d52a987da6ad76b11
Stackoverflow Stackexchange Q: How to use Hibernate with IntelliJ community edition? As we know, Hibernate is supported only in the Ultimate Edition of IntelliJ IDEA. This point is stressed in a similar unanswered question too. So I'd like to achieve a partial result with my Community Edition. Namely, I want to create the conditions to build and run my RDBMS application; I am not after the full spectrum of IDEA's support for RDBMS development. To enable Hibernate we have to (according to IntelliJ support): 1) Create a Hibernate configuration file hibernate.cfg.xml. 2) Download the library files that implement the Hibernate framework and add them to the dependencies of the corresponding module. Is this the right way? If so, what are the libraries I have to download (I intend to use JPA)? A: In the documentation, they say that: This feature is supported in the Ultimate edition only.
Q: How to use Hibernate with IntelliJ community edition? As we know, Hibernate is supported only in the Ultimate Edition of IntelliJ IDEA. This point is stressed in a similar unanswered question too. So I'd like to achieve a partial result with my Community Edition. Namely, I want to create the conditions to build and run my RDBMS application; I am not after the full spectrum of IDEA's support for RDBMS development. To enable Hibernate we have to (according to IntelliJ support): 1) Create a Hibernate configuration file hibernate.cfg.xml. 2) Download the library files that implement the Hibernate framework and add them to the dependencies of the corresponding module. Is this the right way? If so, what are the libraries I have to download (I intend to use JPA)? A: In the documentation, they say that: This feature is supported in the Ultimate edition only. A: You can empower your IntelliJ IDEA Community Edition using the JPA Buddy plugin: https://plugins.jetbrains.com/plugin/15075-jpa-buddy. A: You are able to use Hibernate with IntelliJ Community like any other library. Indeed, only the Ultimate version offers Hibernate support, but you can still add the dependency to the project and then configure everything manually.
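Since the setup is manual in the Community Edition, the usual route is to pull Hibernate in through the build tool rather than downloading jars by hand. A Maven sketch (the versions are only examples; hibernate-core 5.2+ already bundles the JPA support, and you still need a JDBC driver for your database):

<!-- pom.xml -->
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.2.10.Final</version>
</dependency>
<!-- your JDBC driver, e.g. an in-memory H2 for trying things out -->
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.196</version>
</dependency>

With that in place, hibernate.cfg.xml goes on the classpath (e.g. src/main/resources) and the application builds and runs from the Community Edition; what you lose is only the IDE-side tooling such as entity mapping inspections.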
stackoverflow
{ "language": "en", "length": 194, "provenance": "stackexchange_0000F.jsonl.gz:860157", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527596" }
01a54c3eff5cb34fbd6ca95b5200d6fbd285e36f
Stackoverflow Stackexchange Q: Get first and last elements in array, ES6 way let array = [1,2,3,4,5,6,7,8,9,0] Documentation is something like this: [first, ...rest] = array will output 1 and the rest of the array. Now is there a way to take only the first and the last element, 1 & 0, with destructuring, ex: [first, ...middle, last] = array? I know how to take the first and last elements the other way, but I was wondering if it is possible with ES6. A: The rest parameter can only be used at the end, not anywhere else in the destructuring, so it won't work as you expected. Instead, you can destructure certain properties (an array is also an object in JS), for example 0 for the first and the index of the last element for the last. let array = [1,2,3,4,5,6,7,8,9,0] let {0 : a ,[array.length - 1] : b} = array; console.log(a, b) Or, better, extract length as another variable and get the last value based on that (suggested by @Bergi); this works even when there is no variable that refers to the array. let {0 : a ,length : l, [l - 1] : b} = [1,2,3,4,5,6,7,8,9,0]; console.log(a, b)
Q: Get first and last elements in array, ES6 way let array = [1,2,3,4,5,6,7,8,9,0] Documentation is something like this: [first, ...rest] = array will output 1 and the rest of the array. Now is there a way to take only the first and the last element, 1 & 0, with destructuring, ex: [first, ...middle, last] = array? I know how to take the first and last elements the other way, but I was wondering if it is possible with ES6. A: The rest parameter can only be used at the end, not anywhere else in the destructuring, so it won't work as you expected. Instead, you can destructure certain properties (an array is also an object in JS), for example 0 for the first and the index of the last element for the last. let array = [1,2,3,4,5,6,7,8,9,0] let {0 : a ,[array.length - 1] : b} = array; console.log(a, b) Or, better, extract length as another variable and get the last value based on that (suggested by @Bergi); this works even when there is no variable that refers to the array. let {0 : a ,length : l, [l - 1] : b} = [1,2,3,4,5,6,7,8,9,0]; console.log(a, b)
stackoverflow
{ "language": "en", "length": 196, "provenance": "stackexchange_0000F.jsonl.gz:860171", "question_score": "35", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527643" }
638a80f65435c6aac3e2195eb45d465b2a2c361f
Stackoverflow Stackexchange Q: Custom metrics for EMR Spark application that runs on a schedule using AWS Data Pipelines For our project, we have a Spark application running on an EMR cluster, which runs on a schedule using AWS Data Pipelines (i.e. the instances are shut down after the application/job finishes). I am currently looking at alternatives to produce custom metrics (i.e. to publish business metrics). The best solution I have found so far is to use the AWS CloudWatch API to publish these metrics. Are there any other alternatives to achieve the same? Thanks
Q: Custom metrics for EMR Spark application that runs on a schedule using AWS Data Pipelines For our project, we have a Spark application running on an EMR cluster, which runs on a schedule using AWS Data Pipelines (i.e. the instances are shut down after the application/job finishes). I am currently looking at alternatives to produce custom metrics (i.e. to publish business metrics). The best solution I have found so far is to use the AWS CloudWatch API to publish these metrics. Are there any other alternatives to achieve the same? Thanks
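For reference, a sketch of the CloudWatch route with boto3; put_metric_data is the real API call, while the namespace and metric name are made up for illustration:

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')
cloudwatch.put_metric_data(
    Namespace='MySparkApp/Business',       # hypothetical namespace
    MetricData=[{
        'MetricName': 'RecordsProcessed',  # hypothetical business metric
        'Value': 12345.0,
        'Unit': 'Count',
    }]
)

Because the cluster is transient, the push has to happen before the job exits (e.g. at the end of the Spark driver); the main alternatives are writing the numbers to S3 or a database and graphing them from there.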
stackoverflow
{ "language": "en", "length": 89, "provenance": "stackexchange_0000F.jsonl.gz:860200", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527736" }
6e099a2c93777975800d365c15b1da7ce86c68c3
Stackoverflow Stackexchange Q: Is the name of an Elastic Beanstalk environment available as an environment property? Knowing how to set custom environment variables on AWS Elastic Beanstalk, I wonder if there is a default environment property for the name of the environment. If there is, where is this described in the documentation? For example, suppose my EB environment is named "my_env"; is there some kind of default environment property, like e.g. "AWS_EB_ENV_NAME", that I can access to obtain this name? A: You can either set the environment variable yourself during the instance creation via an ebextension or the Web Console during environment creation. Described here: How do you pass custom environment variable on Amazon Elastic Beanstalk (AWS EBS)? Or you can use a combination of requests to the Meta-Data server, which sits between your EC2 instance and the network, and the AWS EB API, as described in: https://serverfault.com/questions/630075/is-it-possible-to-get-metadata-about-the-elastic-beanstalk-environment-from-the
Q: Is the name of an Elastic Beanstalk environment available as an environment property? Knowing how to set custom environment variables on AWS Elastic Beanstalk, I wonder if there is a default environment property for the name of the environment. If there is, where is this described in the documentation? For example, suppose my EB environment is named "my_env"; is there some kind of default environment property, like e.g. "AWS_EB_ENV_NAME", that I can access to obtain this name? A: You can either set the environment variable yourself during the instance creation via an ebextension or the Web Console during environment creation. Described here: How do you pass custom environment variable on Amazon Elastic Beanstalk (AWS EBS)? Or you can use a combination of requests to the Meta-Data server, which sits between your EC2 instance and the network, and the AWS EB API, as described in: https://serverfault.com/questions/630075/is-it-possible-to-get-metadata-about-the-elastic-beanstalk-environment-from-the
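Following the first suggestion in the answer (set the variable yourself, since no automatic property is documented), a sketch of an .ebextensions config; the property name AWS_EB_ENV_NAME and its value are your own choice, while the option_settings namespace shown is the real one for environment properties:

# .ebextensions/envname.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    AWS_EB_ENV_NAME: my_env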
stackoverflow
{ "language": "en", "length": 146, "provenance": "stackexchange_0000F.jsonl.gz:860253", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527892" }
d664d617607b521d25f3721eb36a24d562a21895
Stackoverflow Stackexchange Q: openpyxl How to set cell to 'ignore error'? I'm setting a row of cells to a formula like: TwelveWeekRollingFormula = "=AVERAGE(B" + str(ExcelRowNumber + 20) + ":B" + str(ExcelRowNumber + 31) + ")" This works fine but causes Excel to display a small green triangle in the cell's top-left corner: I can clear the error manually in Excel by clicking on the cell's popup menu (!) and selecting 'Ignore Error'. Is there a way to do this with openpyxl (and not display the green triangle)? A: Those numbers are strings. The way to make this go away, and to ensure that you won't have calculation errors, is to write those values as floats (since you want decimals). They are being entered as strings; you don't want that. The only reason it works fine now is that Excel converts those strings to numbers for the formula successfully.
Q: openpyxl How to set cell to 'ignore error'? I'm setting a row of cells to a formula like: TwelveWeekRollingFormula = "=AVERAGE(B" + str(ExcelRowNumber + 20) + ":B" + str(ExcelRowNumber + 31) + ")" This works fine but causes Excel to display a small green triangle in the cell's top-left corner: I can clear the error manually in Excel by clicking on the cell's popup menu (!) and selecting 'Ignore Error'. Is there a way to do this with openpyxl (and not display the green triangle)? A: Those numbers are strings. The way to make this go away, and to ensure that you won't have calculation errors, is to write those values as floats (since you want decimals). They are being entered as strings; you don't want that. The only reason it works fine now is that Excel converts those strings to numbers for the formula successfully.
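A sketch of what the answer suggests: write the underlying data as numbers, not strings, when populating the sheet (all the names and values here are illustrative):

from openpyxl import Workbook

wb = Workbook()
ws = wb.active

# write the raw data as floats so Excel doesn't flag "number stored as text"
for row, raw_value in enumerate(['1.5', '2.25', '3.0'], start=20):
    ws.cell(row=row, column=2, value=float(raw_value))

# the formula itself stays a string; Excel evaluates it when the file opens
excel_row_number = 0  # illustrative
ws.cell(row=1, column=3,
        value="=AVERAGE(B%d:B%d)" % (excel_row_number + 20, excel_row_number + 31))
wb.save("rolling.xlsx")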
stackoverflow
{ "language": "en", "length": 155, "provenance": "stackexchange_0000F.jsonl.gz:860284", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44527990" }
3ba1de12d1c0e89f69b0132dcc9e4a5998f877c6
Stackoverflow Stackexchange Q: WPF TextBox disable break on special characters I have a TextBox defined like this: <TextBox Text="{Binding License.LicenseKey}" HorizontalContentAlignment="Left" TextAlignment="Justify" Width="350" Height="100" Margin="10,0,0,0" TextWrapping="Wrap" /> Currently a long string will break on special characters: I would prefer it to simply break on any character once it reaches the end of the TextBox, like this: Is there a way to disable the stock breaking that a TextBox uses? I have tried various options for TextAlignment and HorizontalContentAlignment to no avail. A: You could add a zero-width space (U+200B) after each character, which would allow a break at any position. You would need to define a property in your view model and bind to it, and have the getter do this transformation so that it is displayed with line breaks, e.g.: string SomeProperty { get { return String.Join(string.Empty, License.LicenseKey.Zip(new string('\u200B', License.LicenseKey.Length), (x, y) => x.ToString() + y)); } set { License.LicenseKey = value?.Replace("\u200B", string.Empty); } } However, I don't know what would happen to the cursor position.
Q: WPF TextBox disable break on special characters I have a TextBox defined like this: <TextBox Text="{Binding License.LicenseKey}" HorizontalContentAlignment="Left" TextAlignment="Justify" Width="350" Height="100" Margin="10,0,0,0" TextWrapping="Wrap" /> Currently a long string will break on special characters: I would prefer it to simply break on any character once it reaches the end of the TextBox, like this: Is there a way to disable the stock breaking that a TextBox uses? I have tried various options for TextAlignment and HorizontalContentAlignment to no avail. A: You could add a zero-width space (U+200B) after each character, which would allow a break at any position. You would need to define a property in your view model and bind to it, and have the getter do this transformation so that it is displayed with line breaks, e.g.: string SomeProperty { get { return String.Join(string.Empty, License.LicenseKey.Zip(new string('\u200B', License.LicenseKey.Length), (x, y) => x.ToString() + y)); } set { License.LicenseKey = value?.Replace("\u200B", string.Empty); } } However, I don't know what would happen to the cursor position. A: This is exceptionally messy due to the limited options on the TextBox's TextWrapping property. See this forum post to explain the U+200B comment below your question. However, that doesn't work for you because you DON'T want it to break. And if there's a library of standard non-breaking versions of characters, I've been unable to dig it up. The only way I see this working is to use a fixed-width font, keeping track of how many characters are entered alongside the capacity of the box, and adding your own newline when that capacity is reached.
stackoverflow
{ "language": "en", "length": 259, "provenance": "stackexchange_0000F.jsonl.gz:860328", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528118" }
344640ecccf75f387bb1c81b3f4c930254ae9953
Stackoverflow Stackexchange Q: Using table() in dplyr chain Can someone explain why table() doesn't work inside a chain of dplyr-magrittr piped operations? Here's a simple reprex: tibble( type = c("Fast", "Slow", "Fast", "Fast", "Slow"), colour = c("Blue", "Blue", "Red", "Red", "Red") ) %>% table(.$type, .$colour) Error in sort.list(y) : 'x' must be atomic for 'sort.list' Have you called 'sort' on a list? But this works of course: df <- tibble( type = c("Fast", "Slow", "Fast", "Fast", "Slow"), colour = c("Blue", "Blue", "Red", "Red", "Red") ) table(df$type, df$colour) Blue Red Fast 1 2 Slow 1 1 A: I've taken to using with(table(...)) like this: tibble(type = c("Fast", "Slow", "Fast", "Fast", "Slow"), colour = c("Blue", "Blue", "Red", "Red", "Red")) %>% with(table(type, colour)) And, similar to the way we might read %>% as "and then", I would read that as "and then with that data make this table".
Q: Using table() in dplyr chain Can someone explain why table() doesn't work inside a chain of dplyr-magrittr piped operations? Here's a simple reprex: tibble( type = c("Fast", "Slow", "Fast", "Fast", "Slow"), colour = c("Blue", "Blue", "Red", "Red", "Red") ) %>% table(.$type, .$colour) Error in sort.list(y) : 'x' must be atomic for 'sort.list' Have you called 'sort' on a list? But this works of course: df <- tibble( type = c("Fast", "Slow", "Fast", "Fast", "Slow"), colour = c("Blue", "Blue", "Red", "Red", "Red") ) table(df$type, df$colour) Blue Red Fast 1 2 Slow 1 1 A: I've taken to using with(table(...)) like this: tibble(type = c("Fast", "Slow", "Fast", "Fast", "Slow"), colour = c("Blue", "Blue", "Red", "Red", "Red")) %>% with(table(type, colour)) And, similar to the way we might read %>% as "and then", I would read that as "and then with that data make this table". A: This behavior is by design: https://github.com/tidyverse/magrittr/blob/00a1fe3305a4914d7c9714fba78fd5f03f70f51e/README.md#re-using-the-placeholder-for-attributes Since you don't have a . on its own, the tibble is still being passed as the first parameter, so it's really more like ... %>% table(., .$type, .$colour) The official magrittr work-around is to use curly braces: ... %>% {table(.$type, .$colour)} A: The %>% operator in dplyr is actually imported from magrittr. With magrittr, we can also use the %$% operator, which exposes the names from the previous expression: library(tidyverse) library(magrittr) tibble( type = c("Fast", "Slow", "Fast", "Fast", "Slow"), colour = c("Blue", "Blue", "Red", "Red", "Red") ) %$% table(type, colour) Output: colour type Blue Red Fast 1 2 Slow 1 1
stackoverflow
{ "language": "en", "length": 251, "provenance": "stackexchange_0000F.jsonl.gz:860345", "question_score": "11", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528173" }
03d536638ea21e5b9fbca52beadc1f6cfa432b02
Stackoverflow Stackexchange Q: Scripted field to count array length I have the following document: { "likes": { "data": [ { "name": "a" }, { "name": "b" }, { "name": "c" } ] } } I'm trying to run an update_by_query that will add a field called 'like_count' with the number of array items inside likes.data. It's important to know that not all of my documents have the likes.data object. I've tried this: POST /facebook/post/_update_by_query { "script": { "inline": "if (ctx._source.likes != '') { ctx._source.like_count = ctx._source.likes.data.length }", "lang": "painless" } } But I'm getting this error message: { "type": "script_exception", "reason": "runtime error", "script_stack": [ "ctx._source.like_count = ctx._source.likes.data.length }", " ^---- HERE" ], "script": "if (ctx._source.likes != '') { ctx._source.like_count = ctx._source.likes.data.length }", "lang": "painless" } A: Try ctx._source['likes.data.name'].length According to https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html, the object array in ES is flattened to { "likes.data.name": ["a", "b", "c"] } The object-array datatype we were thinking of is the Nested datatype.
Q: Scripted field to count array length I have the following document: { "likes": { "data": [ { "name": "a" }, { "name": "b" }, { "name": "c" } ] } } I'm trying to run an update_by_query that will add a field called 'like_count' with the number of array items inside likes.data. It's important to know that not all of my documents have the likes.data object. I've tried this: POST /facebook/post/_update_by_query { "script": { "inline": "if (ctx._source.likes != '') { ctx._source.like_count = ctx._source.likes.data.length }", "lang": "painless" } } But I'm getting this error message: { "type": "script_exception", "reason": "runtime error", "script_stack": [ "ctx._source.like_count = ctx._source.likes.data.length }", " ^---- HERE" ], "script": "if (ctx._source.likes != '') { ctx._source.like_count = ctx._source.likes.data.length }", "lang": "painless" } A: Try ctx._source['likes.data.name'].length According to https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html, the object array in ES is flattened to { "likes.data.name": ["a", "b", "c"] } The object-array datatype we were thinking of is the Nested datatype. A: Try this ctx._source['likes']['data'].size()
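A sketch of the update-by-query with the null guards Painless expects (comparing a possibly-missing object against '' is what triggers the runtime error); this assumes likes.data is a plain object array in _source and uses the same .size() call as the second answer:

POST /facebook/post/_update_by_query
{
  "script": {
    "inline": "if (ctx._source.likes != null && ctx._source.likes.data != null) { ctx._source.like_count = ctx._source.likes.data.size() }",
    "lang": "painless"
  }
}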
stackoverflow
{ "language": "en", "length": 156, "provenance": "stackexchange_0000F.jsonl.gz:860372", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528270" }
a3e6138d77608c322858e931c8b78d085bb81b62
Stackoverflow Stackexchange Q: Performance impact of calling CreateIfNotExistsAsync() on an Azure queue Should I call CreateIfNotExistsAsync() before every read/write on an Azure queue? I know it results in a REST call, but does it do any IO on the queue? I am using the .Net library for Azure Queues (if this info is important). A: All that method does is try to create the queue and catch the AlreadyExists error, which you could just as easily replicate yourself by catching the 404 when you try and access the queue. There is bound to be some performance impact. More importantly, it increases your costs: from the archive of Understanding Windows Azure Storage Billing – Bandwidth, Transactions, and Capacity [MSDN] We have seen applications that perform a CreateIfNotExist [sic] on a Queue before every put message into that queue. This results in two separate requests to the storage system for every message they want to enqueue, with the create queue failing. Make sure you only create your Blob Containers, Tables and Queues at the start of their lifetime to avoid these extra transaction costs.
Q: Performance impact of calling CreateIfNotExistsAsync() on an Azure queue Should I call CreateIfNotExistsAsync() before every read/write on an Azure queue? I know it results in a REST call, but does it do any IO on the queue? I am using the .Net library for Azure Queues (if this info is important). A: All that method does is try to create the queue and catch the AlreadyExists error, which you could just as easily replicate yourself by catching the 404 when you try and access the queue. There is bound to be some performance impact. More importantly, it increases your costs: from the archive of Understanding Windows Azure Storage Billing – Bandwidth, Transactions, and Capacity [MSDN] We have seen applications that perform a CreateIfNotExist [sic] on a Queue before every put message into that queue. This results in two separate requests to the storage system for every message they want to enqueue, with the create queue failing. Make sure you only create your Blob Containers, Tables and Queues at the start of their lifetime to avoid these extra transaction costs.
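A sketch of the recommended pattern with the WindowsAzure.Storage client: create the queue once at startup, then just enqueue on the hot path (the connection string and queue name are placeholders):

// startup: one-time queue creation
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudQueueClient queueClient = account.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("myqueue");
await queue.CreateIfNotExistsAsync();

// hot path: no existence check, just the single enqueue transaction
await queue.AddMessageAsync(new CloudQueueMessage("hello"));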
stackoverflow
{ "language": "en", "length": 179, "provenance": "stackexchange_0000F.jsonl.gz:860387", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528304" }
f6be30e273a07ca22e35a338ecbe3deec572df28
Stackoverflow Stackexchange Q: git status for a past commit? How do you do a git status-style listing of the changed files in a past commit? NOTE: this question might have already been asked, but phrased much differently. However, I am astounded to see that searching for the following expression on Google yields no useful results: git status for past commit. A: git status is the wrong command here. If you want to see what a previous commit did, you should use git show <commit>.
Q: git status for a past commit? How do you do a git status-style listing of the changed files in a past commit? NOTE: this question might have already been asked, but phrased much differently. However, I am astounded to see that searching for the following expression on Google yields no useful results: git status for past commit. A: git status is the wrong command here. If you want to see what a previous commit did, you should use git show <commit>. A: git show --name-status <commit> A: I found: git show --stat --oneline b8351c4 where b8351c4 is the commit in question.
stackoverflow
{ "language": "en", "length": 99, "provenance": "stackexchange_0000F.jsonl.gz:860405", "question_score": "21", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528357" }
105b5b898856a83dcbd89a10bc106fd12629b3b4
Stackoverflow Stackexchange Q: Time Profiler In Xcode Missing Record Settings, Display Settings I'm using Instruments 8.3.2, and the Record Settings and Display Settings tabs are missing in the inspector window. How do I get those tabs back? Please see the attached screenshot. The normal inspector window looks like this (see arrow 5): A: The majority of these settings have moved to the bottom of the Instruments window. Click on "Call Tree" to see a modal with the missing settings:
Q: Time Profiler In Xcode Missing Record Settings, Display Settings I'm using Instruments 8.3.2, and the Record Settings and Display Settings tabs are missing in the inspector window. How do I get those tabs back? Please see the attached screenshot. The normal inspector window looks like this (see arrow 5): A: The majority of these settings have moved to the bottom of the Instruments window. Click on "Call Tree" to see a modal with the missing settings: A: With Xcode 9.3 & Instruments 9.3, the "Recording Options" moved again. Now they are in the app menu at File > Recording Options…. Or you can use the shortcut Cmd+Opt+R. Or long-press the Record button; a menu will appear that contains Recording Options. You can see the menu item in the picture below:
stackoverflow
{ "language": "en", "length": 131, "provenance": "stackexchange_0000F.jsonl.gz:860439", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528438" }
ae7793a221e3db1f8e13af4eeefcecabe4239706
Stackoverflow Stackexchange Q: after pip successful installed: ModuleNotFoundError I am trying to install the SimPy module so that I can use it in IDLE. However, every time I try to import it in IDLE, I get an error. I already tried reinstalling Python and pip and tried to modify the location of the apps. SimPy can be found in the directory of Python 2.7. I'm using Python 3.6.1. After I correctly installed simpy in the terminal: pip install simpy Requirement already satisfied: simpy in /Library/Python/2.7/site-packages When I type into IDLE: import simpy I get the error: Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> import simpy ModuleNotFoundError: No module named 'simpy' How can I solve this? A: When this happened to me (on macOS), the problem turned out to be that the Python installation I specified at the top of my script.py was not the same Python installation that conda/pip were using on the command line. To get the command line and my script to match up, I changed the header in my script.py to just use: #!python Then when I ran ./script.py on the command line, everything finally worked.
Q: after pip successful installed: ModuleNotFoundError I am trying to install the SimPy module so that I can use it in IDLE. However, every time I try to import it in IDLE, I get an error. I already tried reinstalling Python and pip and tried to modify the location of the apps. SimPy can be found in the directory of Python 2.7. I'm using Python 3.6.1. After I correctly installed simpy in the terminal: pip install simpy Requirement already satisfied: simpy in /Library/Python/2.7/site-packages When I type into IDLE: import simpy I get the error: Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> import simpy ModuleNotFoundError: No module named 'simpy' How can I solve this? A: When this happened to me (on macOS), the problem turned out to be that the Python installation I specified at the top of my script.py was not the same Python installation that conda/pip were using on the command line. To get the command line and my script to match up, I changed the header in my script.py to just use: #!python Then when I ran ./script.py on the command line, everything finally worked. A: I had the same problem (on Windows), and the root cause in my case was ANTIVIRUS software! It has an "Auto-Containment" feature that wraps the running process in some kind of virtual machine. The symptoms are the same: pip install <module> works fine in one cmd window and import <module> fails when executed from another process. A: What worked for me is adding the module location to sys.path: import sys sys.path.insert(0, r"/path/to/your/module") A: Since you are using Python 3.6.1, you may need to specify the version of Python you want to install simpy for. Try running pip3 install simpy to install the simpy module into your Python 3 library. A: Wherever you're running your code, try this: import sys sys.path sys.executable It might be possible that you're running Python in one environment while the module is installed in another environment. A: This command works for me for the same issue: python -m pip install "your library" A: I wrote a package myself and thought the __init__.py could be ignored; then I encountered this issue. When I added an empty __init__.py to my package, the issue was fixed. A: Do not have a file called simpy.py in the current working directory, as Python will try to load this file instead of the module that you want. This may cause the problem described in the title of this question.
stackoverflow
{ "language": "en", "length": 418, "provenance": "stackexchange_0000F.jsonl.gz:860504", "question_score": "24", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528638" }
5f573cdb4283f1c32f953bf6a6c000725e73ff9e
Stackoverflow Stackexchange Q: Optionally embedding framework in Xcode by build settings How can a framework be optionally embedded by build settings in Xcode? For example, I want to embed a framework only in debug builds. How can I do this?
Q: Optionally embedding framework in Xcode by build settings How can a framework be optionally embedded by build settings in Xcode? For example, I want to embed a framework only in debug builds. How can I do this?
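One approach (a sketch, not an official Xcode feature): set the framework to "Do Not Embed" and add a Run Script build phase that copies and re-signs it only for Debug. The framework path is illustrative; the ${...} variables are standard Xcode build settings:

# Run Script build phase
if [ "${CONFIGURATION}" = "Debug" ]; then
  DEST="${BUILT_PRODUCTS_DIR}/${FRAMEWORKS_FOLDER_PATH}"
  mkdir -p "${DEST}"
  cp -R "${SRCROOT}/Vendor/DebugOnly.framework" "${DEST}/"
  codesign --force --sign "${EXPANDED_CODE_SIGN_IDENTITY}" \
    "${DEST}/DebugOnly.framework"
fi

You may also need to weak-link the framework (or guard its use with #if DEBUG) so Release builds don't fail to load it at runtime.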
stackoverflow
{ "language": "en", "length": 36, "provenance": "stackexchange_0000F.jsonl.gz:860525", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528704" }
de709b852d1b1ee5b4cd0309715aa686f7150df7
Stackoverflow Stackexchange Q: Argument 1 passed to Symfony\Component\Form\FormRenderer::renderBlock() must be an instance of ...\FormView, instance of ...\Form given The full error includes the namespace Symfony\Component\Form, which is replaced with three dots here due to the title's maximum length. So, I am following the steps presented in the docs and I'm unable to find the source of the error I'm getting. If anyone could help, I'd greatly appreciate it. Here is the method from my AuthController: /** * @Route("/register", name="registrationPage") */ public function showRegistrationPage(Request $request) { return $this->render('auth/register.html.twig', [ 'register_form' => $this->createForm(RegisterType::class, (new UserInformation())) ]); } And here is the method where I declare the form: public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('firstname', TextType::class, ['attr' => ['class' => 'form-control']]) ->add('secondname', TextType::class, ['attr' => ['class' => 'form-control']]) ->add('email', EmailType::class, ['attr' => ['class' => 'form-control']]) ->add('password', PasswordType::class, ['attr' => ['class' => 'form-control']]) ->add('password_confirmation', PasswordType::class, [ 'label' => 'Confirm Password', 'attr' => ['class' => 'form-control'], 'mapped' =>false ]) ->add('Register', SubmitType::class, ['attr' => ['class' => 'btn btn-primary']]); } A: The missing part was the createView() method: /** * @Route("/register", name="registrationPage") */ public function showRegistrationPage(Request $request) { return $this->render('auth/register.html.twig', [ 'register_form' => $this->createForm(RegisterType::class, (new UserInformation()))->createView() ]); }
Q: Argument 1 passed to Symfony\Component\Form\FormRenderer::renderBlock() must be an instance of ...\FormView, instance of ...\Form given The whole error is missing the namespace Symfony\Component\Form, which is replaced with 3 dots due to the title's maximum character limit. So, I am following the steps presented in the docs and I'm unable to find the source of the error I'm getting. If anyone could help, I'd greatly appreciate it. Here is the method from my AuthController: /** * @Route("/register", name="registrationPage") */ public function showRegistrationPage(Request $request) { return $this->render('auth/register.html.twig', [ 'register_form' => $this->createForm(RegisterType::class, (new UserInformation())) ]); } And here is the method where I declare the form: public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('firstname', TextType::class, ['attr' => ['class' => 'form-control']]) ->add('secondname', TextType::class, ['attr' => ['class' => 'form-control']]) ->add('email', EmailType::class, ['attr' => ['class' => 'form-control']]) ->add('password', PasswordType::class, ['attr' => ['class' => 'form-control']]) ->add('password_confirmation', PasswordType::class, [ 'label' => 'Confirm Password', 'attr' => ['class' => 'form-control'], 'mapped' => false ]) ->add('Register', SubmitType::class, ['attr' => ['class' => 'btn btn-primary']]); } A: The missing part was the createView() method: /** * @Route("/register", name="registrationPage") */ public function showRegistrationPage(Request $request) { return $this->render('auth/register.html.twig', [ 'register_form' => $this->createForm(RegisterType::class, (new UserInformation()))->createView() ]); } A: /** * @Route("/register", name="registrationPage") */ public function showRegistrationPage(Request $request) { $form = $this->createForm(RegisterType::class, (new UserInformation())); return $this->render('auth/register.html.twig', [ 'register_form' => $form->createView() ]); } http://symfony.com/doc/current/forms.html#building-the-form
stackoverflow
{ "language": "en", "length": 213, "provenance": "stackexchange_0000F.jsonl.gz:860548", "question_score": "23", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528757" }
006694ff7d5d52034be283b99bd0db056c56ad7a
Stackoverflow Stackexchange Q: Oracle PL/SQL: How to pretty print a large number I have a column with type NUMBER(38) and when I print numbers in PL/SQL they are displayed like: MY_ID ---------- 1.9351E+14 1.9351E+14 1.9351E+14 What can I do to print these as the full number? A: It depends. What do you mean by "print"? Also, what is the application through which you interact with the database? The best place to format the output from your queries is your front-end or reporting application. The illustration below, all done in SQL*Plus, shows how to format a numeric column so that numbers are "printed" in natural format, instead of scientific notation. Note that the COLUMN command is a SQL*Plus command which has absolutely nothing to do with SQL (in particular, with SQL functions such as to_char()). SQL> select 1.9351E+14 as num from dual; NUM ---------- 1.9351E+14 SQL> column num format 999999999999999 SQL> select 1.9351E+14 as num from dual; NUM ---------------- 193510000000000
Q: Oracle PL/SQL: How to pretty print a large number I have a column with type NUMBER(38) and when I print numbers in PL/SQL they are displayed like: MY_ID ---------- 1.9351E+14 1.9351E+14 1.9351E+14 What can I do to print these as the full number? A: It depends. What do you mean by "print"? Also, what is the application through which you interact with the database? The best place to format the output from your queries is your front-end or reporting application. The illustration below, all done in SQL*Plus, shows how to format a numeric column so that numbers are "printed" in natural format, instead of scientific notation. Note that the COLUMN command is a SQL*Plus command which has absolutely nothing to do with SQL (in particular, with SQL functions such as to_char()). SQL> select 1.9351E+14 as num from dual; NUM ---------- 1.9351E+14 SQL> column num format 999999999999999 SQL> select 1.9351E+14 as num from dual; NUM ---------------- 193510000000000 A: You need to set numwidth 30: SQL> select 1024000*1024000*1024000 from dual; 1024000*1024000*1024000 ----------------------- 1.0737E+18 SQL> show numwidth numwidth 10 SQL> set numwidth 30 SQL> select 1024000*1024000*1024000 from dual; 1024000*1024000*1024000 ------------------------------ 1073741824000000000 SQL> A: The Oracle documentation on number format models provides extensive details on number formatting. You can use the to_char function as below to pretty print (with a thousands separator): SQL> select TO_CHAR( MY_ID, '999,999,999') from MY_TABLE;
stackoverflow
{ "language": "en", "length": 222, "provenance": "stackexchange_0000F.jsonl.gz:860615", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44528920" }
4274df1dccb8f1b1ddb9b68e45f5b24b5a01464c
Stackoverflow Stackexchange Q: Is there an easy way to buffer io.ReaderAt and io.WriterAt? I'm implementing a custom sftp server for a project at work that will use an AWS compatible system as the backend. Based on what I am seeing, I'm thinking I should implement the sftp.Handlers interface using the s3.Uploader and s3.Downloader or s3.PutObject and s3.GetObject. I've used io.PipeReader and io.PipeWriter before to pipe an io.Writer to an io.Reader but in this case, I need to do something like: * *Get: io.ReaderAt <- ??? <- io.Reader *Put: io.WriterAt -> ??? -> io.Reader I'm guessing ??? will be different in both cases but they both seem like they'd be a type of pipe where we hold data until it's available for the other end. Does something like this exist or do I need to implement it myself? Any suggestions on implementing it? A: I'm personally making use of https://github.com/avvmoto/buf-readerat which specifically buffers ReadAt. Saved me a lot of pain.
Q: Is there an easy way to buffer io.ReaderAt and io.WriterAt? I'm implementing a custom sftp server for a project at work that will use an AWS compatible system as the backend. Based on what I am seeing, I'm thinking I should implement the sftp.Handlers interface using the s3.Uploader and s3.Downloader or s3.PutObject and s3.GetObject. I've used io.PipeReader and io.PipeWriter before to pipe an io.Writer to an io.Reader but in this case, I need to do something like: * *Get: io.ReaderAt <- ??? <- io.Reader *Put: io.WriterAt -> ??? -> io.Reader I'm guessing ??? will be different in both cases but they both seem like they'd be a type of pipe where we hold data until it's available for the other end. Does something like this exist or do I need to implement it myself? Any suggestions on implementing it? A: I'm personally making use of https://github.com/avvmoto/buf-readerat which specifically buffers ReadAt. Saved me a lot of pain.
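For the Put direction, one pattern (a sketch under the assumption that writes arrive strictly in offset order, e.g. an s3manager.Downloader configured with Concurrency = 1) is to adapt an io.Writer, such as the write end of an io.Pipe, into an io.WriterAt; the pipe's read end then serves as the io.Reader. The trade-off is giving up parallel chunk transfers.

package main

import (
	"fmt"
	"io"
	"os"
)

// sequentialWriterAt adapts an io.Writer to io.WriterAt, assuming writes
// arrive strictly in offset order (an assumption about your setup).
type sequentialWriterAt struct {
	w      io.Writer
	offset int64
}

func (s *sequentialWriterAt) WriteAt(p []byte, off int64) (int, error) {
	if off != s.offset {
		return 0, fmt.Errorf("out-of-order write at %d, expected %d", off, s.offset)
	}
	n, err := s.w.Write(p)
	s.offset += int64(n)
	return n, err
}

func main() {
	// demo: the WriterAt side feeds stdout; with io.Pipe the read end
	// could instead be handed to whatever needs an io.Reader.
	w := &sequentialWriterAt{w: os.Stdout}
	w.WriteAt([]byte("hello "), 0)
	w.WriteAt([]byte("world\n"), 6)
}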
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:860651", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44529031" }
962df4b60f8bf5cda892dc7f5f73b5bb511aee67
Stackoverflow Stackexchange Q: Google Tag Manager container isn't updating I'm running into a weird issue where my changes in Google Tag Manager aren't being reflected on my website. For example, I update my Google Analytics tag to fire a "Universal Analytics" tag but my website is still firing the "Classic Google Analytics". If I preview my container, then everything works as expected but if I view the tags without preview mode, I don't see any of the latest changes. The tags are definitely coming from GTM as you can see in the screenshot below: https://imgur.com/a/Px3GW (couldn't upload to Stackoverflow) Has anyone experienced this issue? Could caching be affecting the GTM changes?
Q: Google Tag Manager container isn't updating I'm running into a weird issue where my changes in Google Tag Manager aren't being reflected on my website. For example, I update my Google Analytics tag to fire a "Universal Analytics" tag but my website is still firing the "Classic Google Analytics". If I preview my container, then everything works as expected but if I view the tags without preview mode, I don't see any of the latest changes. The tags are definitely coming from GTM as you can see in the screenshot below: https://imgur.com/a/Px3GW (couldn't upload to Stackoverflow) Has anyone experienced this issue? Could caching be affecting the GTM changes?
stackoverflow
{ "language": "en", "length": 109, "provenance": "stackexchange_0000F.jsonl.gz:860670", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44529087" }
550d8e357bfe923c394fc54c7e5d6e9512057427
Stackoverflow Stackexchange Q: Empty diff window with Apply Patch in TortoiseSVN Steps: * *Use TortoiseSVN's context menu to select "Create Patch" *On another machine do the same but select "Apply Patch" and select the file generated in step 1. *A blank merge window is opened. It looks like this: The patch file is valid and I can use unix patch to apply it successfully (with some line-ending tinkering). I'm on Windows 10 and TortoiseSVN/TortoiseMerge 1.9.5 A: The problem was that TortoiseMerge was maximized. There's a floating window on the left. Unmaximize the TortoiseMerge window and you can see the file selector window. You can select files in that window to see them in the diff view and there's buttons for applying the patch. It should look like this:
Q: Empty diff window with Apply Patch in TortoiseSVN Steps: * *Use TortoiseSVN's context menu to select "Create Patch" *On another machine do the same but select "Apply Patch" and select the file generated in step 1. *A blank merge window is opened. It looks like this: The patch file is valid and I can use unix patch to apply it successfully (with some line-ending tinkering). I'm on Windows 10 and TortoiseSVN/TortoiseMerge 1.9.5 A: The problem was that TortoiseMerge was maximized. There's a floating window on the left. Unmaximize the TortoiseMerge window and you can see the file selector window. You can select files in that window to see them in the diff view and there's buttons for applying the patch. It should look like this: A: I had the same problem and had to select TortoiseMerge for Settings > Diff Viewer > Merge Tool. I previously configured an external editor here and then it did not show this patch window but only an empty merge tool. Maybe this feature does not work very well with external editors.
stackoverflow
{ "language": "en", "length": 178, "provenance": "stackexchange_0000F.jsonl.gz:860726", "question_score": "15", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44529285" }
ee79c824b09dfef0a966e8e4dca8185276869bc1
Stackoverflow Stackexchange Q: Is libcheck (.net assembly comparison tool from Microsoft) discontinued? I am trying to download libcheck. But the link on Microsoft.com is broken, and it is not searchable on Microsoft.com either. Is this tool discontinued? If yes, is there any alternative to this tool from Microsoft?
Q: Is libcheck (.net assembly comparison tool from Microsoft) discontinued? I am trying to download libcheck. But the link on Microsoft.com is broken, and it is not searchable on Microsoft.com either. Is this tool discontinued? If yes, is there any alternative to this tool from Microsoft?
stackoverflow
{ "language": "en", "length": 42, "provenance": "stackexchange_0000F.jsonl.gz:860743", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44529337" }
bdfe090f40c30d6d0c0d2af899a1b37a32152050
Stackoverflow Stackexchange Q: Kotlin NullPointerException occurrence I'm new to Kotlin, and I'm confused by the situation below, which I ran into when starting with null safety. There's some data inconsistency with regard to initialization (an uninitialized this available in a constructor is used somewhere). Could anyone describe the situation in more detail? A: Example adapted from a Kotlin discussion on exactly this: class Foo { val c: String // Non-nullable init { bar() c = "" // Initialised for the first time here } fun bar() { println(c.length) // Oh dear } } fun main(args: Array<String>) { Foo() }
Q: Kotlin NullPointerException occurrence I'm new to Kotlin, and I'm confused by the situation below, which I ran into when starting with null safety. There's some data inconsistency with regard to initialization (an uninitialized this available in a constructor is used somewhere). Could anyone describe the situation in more detail? A: Example adapted from a Kotlin discussion on exactly this: class Foo { val c: String // Non-nullable init { bar() c = "" // Initialised for the first time here } fun bar() { println(c.length) // Oh dear } } fun main(args: Array<String>) { Foo() }
stackoverflow
{ "language": "en", "length": 93, "provenance": "stackexchange_0000F.jsonl.gz:860759", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44529375" }
fdc3992b9122ba1e15e380c7cbf4c4f12874a121
Stackoverflow Stackexchange Q: How can I define a variable array in a Laravel blade view? I tried like this: <div class="media-body"> @foreach($categories as $category) @php $category[] = $category->name @endphp @endforeach {{ implode(",", $category) }} </div> When the code is executed, I get the error: undefined variable category. How can I solve it? A: You can simply use a Laravel Collection: {{ $categories->pluck('name')->implode(', ') }} Or if you want to do this in a foreach, then: @php ($names = []) @foreach ($categories as $category) @php ($names[] = $category->name) @endforeach {{ implode(', ', $names) }}
Q: How can I define a variable array in a Laravel blade view? I tried like this: <div class="media-body"> @foreach($categories as $category) @php $category[] = $category->name @endphp @endforeach {{ implode(",", $category) }} </div> When the code is executed, I get the error: undefined variable category. How can I solve it? A: You can simply use a Laravel Collection: {{ $categories->pluck('name')->implode(', ') }} Or if you want to do this in a foreach, then: @php ($names = []) @foreach ($categories as $category) @php ($names[] = $category->name) @endforeach {{ implode(', ', $names) }} A: You have to declare an array within a <?php ... ?> block and then use the same in a {{blade}} block.
stackoverflow
{ "language": "en", "length": 108, "provenance": "stackexchange_0000F.jsonl.gz:860834", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44529690" }
5062692440eb1f17e40f93db76b1acbd507d6d1c
Stackoverflow Stackexchange Q: Space character not being matched by regex in .htaccess I want to block, using mod_rewrite, any request that contains an ID containing any non-numeric character, or an empty ID. I have the following rule in my .htaccess file: RewriteCond %{QUERY_STRING} ID=(\d*[^\d&]+\d*)*(&|$) RewriteRule .* - [F] Which is working except for requests that contain a space character, e.g. GET /page.php?ID=5 5 HTTP/1.1 The space character between the two 5s is successfully matched by [^\d&]+ when I use various test suites (such as https://regex101.com/) but this type of request is still getting through. What do I need to change? (yes, incorrect user input is handled in my PHP so it doesn't matter if this gets through) A: Perhaps this will work for you: RewriteCond %{QUERY_STRING} !(?:^|&)ID=\d+(?:&|$) RewriteRule ^ - [F] And if you only want it to affect requests that do have an ID parameter in the query string (so requests with no ID are allowed): RewriteCond %{QUERY_STRING} (?:^|&)(?:\%(?:20|09))*ID(?:\%(?:20|09))*= [NC] RewriteCond %{QUERY_STRING} !(?:^|&)ID=\d+(?:&|$) RewriteRule ^ - [F] I also added [NC] (non-case-sensitive) so that iD etc. will also be covered by this.
Q: Space character not being matched by regex in .htaccess I want to block, using mod_rewrite, any request that contains an ID containing any non-numeric character, or an empty ID. I have the following rule in my .htaccess file: RewriteCond %{QUERY_STRING} ID=(\d*[^\d&]+\d*)*(&|$) RewriteRule .* - [F] Which is working except for requests that contain a space character, e.g. GET /page.php?ID=5 5 HTTP/1.1 The space character between the two 5s is successfully matched by [^\d&]+ when I use various test suites (such as https://regex101.com/) but this type of request is still getting through. What do I need to change? (yes, incorrect user input is handled in my PHP so it doesn't matter if this gets through) A: Perhaps this will work for you: RewriteCond %{QUERY_STRING} !(?:^|&)ID=\d+(?:&|$) RewriteRule ^ - [F] And if you only want it to affect requests that do have an ID parameter in the query string (so requests with no ID are allowed): RewriteCond %{QUERY_STRING} (?:^|&)(?:\%(?:20|09))*ID(?:\%(?:20|09))*= [NC] RewriteCond %{QUERY_STRING} !(?:^|&)ID=\d+(?:&|$) RewriteRule ^ - [F] I also added [NC] (non-case-sensitive) so that iD etc. will also be covered by this. A: @Andreykul, spaces are encoded for requests from regular browsers, yes, but these are requests probing for vulnerabilities. Possibly vulnerabilities in the webserver itself, rather than your web application... (?) GET /page.php?ID=5 5 HTTP/1.1 The "problem" with this is that it's an invalid/malformed request. For this to be valid, it must be URL encoded. The (literal) space is a special character in the first line of the request and acts as a delimiter between the "Method", "Request-URI" and "HTTP-Version" parts of the header. Since the request is invalid, it would be reasonable to expect it to already be blocked at the server level with a 400 Bad Request. If the server is not blocking the request then you are likely to experience unexpected behaviour. Which is possibly what you are seeing here... For such a request, if you examine the QUERY_STRING server variable you will see that it doesn't contain the space or the second 5. The value is truncated before the literal space; it simply contains ID=5. (Consequently, this is also what PHP sees.) So, your regex (CondPattern) never matches. However, the complete request URI is present in the first line of the request (as you posted above) - this is available in the THE_REQUEST Apache server variable. It will probably be preferable to simply block any request that contains literal spaces (which is invalid anyway), rather than searching specifically for requests containing the ID parameter. For example: RewriteCond %{THE_REQUEST} \s.*\s.*\s RewriteRule ^ - [R=400] This checks for any whitespace contained between the outer space delimiters. Reference: https://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html
stackoverflow
{ "language": "en", "length": 441, "provenance": "stackexchange_0000F.jsonl.gz:860875", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44529821" }
f6f609eb6cb0f375665d7fa3b1e2128f8515868a
Stackoverflow Stackexchange Q: Converting UIImage to MLMultiArray for Keras Model In Python, I trained an image classification model with keras to receive input as a [224, 224, 3] array and output a prediction (1 or 0). When I load the save the model and load it into xcode, it states that the input has to be in MLMultiArray format. Is there a way for me to convert a UIImage into MLMultiArray format? Or is there a way for me change my keras model to accept CVPixelBuffer type objects as an input. A: When you convert the caffe model to MLModel, you need to add this line: image_input_names = 'data' Take my own transfer script as an example, the script should be like this: import coremltools coreml_model = coremltools.converters.caffe.convert(('gender_net.caffemodel', 'deploy_gender.prototxt'), image_input_names = 'data', class_labels = 'genderLabel.txt') coreml_model.save('GenderMLModel.mlmodel') And then your MLModel's input data will be CVPixelBufferRef instead of MLMultiArray. Transferring UIImage to CVPixelBufferRef would be an easy thing.
Q: Converting UIImage to MLMultiArray for Keras Model In Python, I trained an image classification model with keras to receive input as a [224, 224, 3] array and output a prediction (1 or 0). When I load the save the model and load it into xcode, it states that the input has to be in MLMultiArray format. Is there a way for me to convert a UIImage into MLMultiArray format? Or is there a way for me change my keras model to accept CVPixelBuffer type objects as an input. A: When you convert the caffe model to MLModel, you need to add this line: image_input_names = 'data' Take my own transfer script as an example, the script should be like this: import coremltools coreml_model = coremltools.converters.caffe.convert(('gender_net.caffemodel', 'deploy_gender.prototxt'), image_input_names = 'data', class_labels = 'genderLabel.txt') coreml_model.save('GenderMLModel.mlmodel') And then your MLModel's input data will be CVPixelBufferRef instead of MLMultiArray. Transferring UIImage to CVPixelBufferRef would be an easy thing. A: Did not tried this, but here is how its done for the FOOD101 sample func preprocess(image: UIImage) -> MLMultiArray? { let size = CGSize(width: 299, height: 299) guard let pixels = image.resize(to: size).pixelData()?.map({ (Double($0) / 255.0 - 0.5) * 2 }) else { return nil } guard let array = try? MLMultiArray(shape: [3, 299, 299], dataType: .double) else { return nil } let r = pixels.enumerated().filter { $0.offset % 4 == 0 }.map { $0.element } let g = pixels.enumerated().filter { $0.offset % 4 == 1 }.map { $0.element } let b = pixels.enumerated().filter { $0.offset % 4 == 2 }.map { $0.element } let combination = r + g + b for (index, element) in combination.enumerated() { array[index] = NSNumber(value: element) } return array } https://github.com/ph1ps/Food101-CoreML A: In your Core ML conversion script you can supply the parameter image_input_names='data' where data is the name of your input. Now Core ML will treat this input as an image (CVPixelBuffer) instead of a multi-array.
stackoverflow
{ "language": "en", "length": 319, "provenance": "stackexchange_0000F.jsonl.gz:860892", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44529869" }
0d981efe0c23819295d5141af38f88820091a408
Stackoverflow Stackexchange Q: How to find maximal subgraph of bipartite graph with valence constraint? I have a bipartite graph. I'll refer to red-nodes and black-nodes of the respective disjoint sets. I would like to know how to find a connected induced subgraph that maximizes the number of red-nodes while ensuring that all black nodes in the subgraph have new valences less than or equal to 2. Where "induced" means that if two nodes are connected in the original graph and both exist in the subgraph then the edge between them is automatically included. Eventually I'd like to introduce non-negative edge-weights. Can this be reduced to a standard graph algorithm? Hopefully one with known complexity and simple implementation. It's clearly possible to grow a subgraph greedily. But is this best? A: I'm sure that this problem belongs to the NP-complete class, so there is no easy way to solve it. I would suggest using a constraint satisfaction approach. There are quite a few ways to formulate your problem, for example mixed-integer programming, MaxSAT or even pseudo-boolean constraints. For a first try, I would recommend the MiniZinc solver. For example, consider this example of defining and solving graph problems in MiniZinc.
Q: How to find maximal subgraph of bipartite graph with valence constraint? I have a bipartite graph. I'll refer to red-nodes and black-nodes of the respective disjoint sets. I would like to know how to find a connected induced subgraph that maximizes the number of red-nodes while ensuring that all black nodes in the subgraph have new valences less than or equal to 2. Where "induced" means that if two nodes are connected in the original graph and both exist in the subgraph then the edge between them is automatically included. Eventually I'd like to introduce non-negative edge-weights. Can this be reduced to a standard graph algorithm? Hopefully one with known complexity and simple implementation. It's clearly possible to grow a subgraph greedily. But is this best? A: I'm sure that this problem belongs to the NP-complete class, so there is no easy way to solve it. I would suggest using a constraint satisfaction approach. There are quite a few ways to formulate your problem, for example mixed-integer programming, MaxSAT or even pseudo-boolean constraints. For a first try, I would recommend the MiniZinc solver. For example, consider this example of defining and solving graph problems in MiniZinc. A: Unfortunately this is NP-hard, so there are probably no polynomial-time algorithms to solve it. Here is a reduction from the NP-hard problem Independent Set, where we are given a graph G = (V, E) (with n = |V| and m = |E|) and an integer k, and the task is to determine whether it is possible to find a set of k or more vertices such that no two vertices in the set are linked by an edge: * *For every vertex v_i in G, create a red vertex r_i in H. *For every edge (v_i, v_j) in G, create the following in H: * *a black vertex b_ij, *n+1 red vertices t_ijk (1 <= k <= n+1), *n black vertices u_ijk (1 <= k <= n), *n edges (t_ijk, u_ijk) (1 <= k <= n) *n edges (t_ijk, u_ij{k-1}) (2 <= k <= n+1) *the three edges (r_i, b_ij), (r_j, b_ij), and (t_ij1, b_ij). *For every pair of vertices v_i, v_j, create the following: * *a black vertex c_ij, *the two edges (r_i, c_ij) and (r_j, c_ij). *Set the threshold to m(n+1)+k. Call the set of all r_i R, the set of all b_ij B, the set of all c_ij C, the set of all t_ij T, and the set of all u_ij U. The general idea here is that we force each black vertex b_ij to choose at most 1 of the 2 red vertices r_i and r_j that correspond to the endpoints of the edge (i, j) in G. We do this by giving each of these b_ij vertices 3 outgoing edges, of which one (the one to t_ij1) is a "must-have" -- that is, any solution in which a t_ij1 vertex is not selected can be improved by selecting it, as well as the n other red vertices it connects to (via a "wiggling path" that alternates between vertices in t_ijk and vertices in u_ijk), getting rid of either r_i or r_j to restore the property that no black vertex has 3 or more neighbours in the solution if necessary, and then finally restoring connectedness by choosing vertices from C as necessary. (The c_ij vertices are "connectors": they exist only to ensure that whatever subset of R we include can be made into a single connected component.) Suppose first that there is an IS of size k in G. We will show that there is a connected induced subgraph X with at least m(n+1)+k red nodes in H, in which every black vertex has at most 2 neighbours in X. First, include in X the k vertices from R that correspond to the vertices in the IS (such a set must exist by assumption).
Because these vertices form an IS, no vertex in B is adjacent to more than 1 of them, so for each vertex b_ij, we may safely add it, and the "wiggling path" of 2n+1 vertices beginning at t_ij1, into X as well. Each of these wiggling paths contains n+1 red vertices, and there are m such paths (one for each edge in G), so there are now m(n+1)+k red vertices in X. Finally, to ensure that X is connected, add to it every vertex c_ij such that r_i and r_j are both in X already: notice that this does not change the total number of red vertices in X. Now suppose that there is a connected induced subgraph X with at least m(n+1)+k red nodes in H, in which every black vertex has at most 2 neighbours in X. We will show that there is an IS in G of size k. The only red vertices in H are those in R and those in T. There are only n vertices in R, so if X does not contain all m wiggly paths, it must have at most (m-1)(n+1)+n = m(n+1)-1 red vertices, contradicting the assumption that it has at least m(n+1)+k red vertices. Thus X must contain all m wiggly paths. This leaves k other red vertices in X, which must be from R. No two of these vertices can be adjacent to the same vertex in B, since that B-vertex would then be adjacent to 3 vertices: thus, these k vertices correspond to an IS in G. Since a YES-instance of IS implies a YES-instance to the constructed instance of your problem and vice versa, the solution to the constructed instance of your problem corresponds exactly to the solution to the IS instance; and since the construction is clearly polynomial-time, this establishes that your problem is NP-hard.
stackoverflow
{ "language": "en", "length": 953, "provenance": "stackexchange_0000F.jsonl.gz:860947", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530025" }
2ac7a903b465b1dc17ef3a0b3ed6b520cb459613
Stackoverflow Stackexchange Q: How to ignore case when using str_detect? The stringr package provides good string functions. To search for a string (ignoring case) one could use stringr::str_detect('TOYOTA subaru',ignore.case('toyota')) This works but gives a warning: Please use (fixed|coll|regex)(x, ignore_case = TRUE) instead of ignore.case(x) What is the right way of rewriting it? A: You can use the base R function grepl() to accomplish the same thing without a nested function. It simply accepts ignore.case as an argument. grepl("toyota", 'TOYOTA subaru', ignore.case = TRUE) (Note that the order of the first two arguments (pattern and string) is switched between grepl and str_detect.)
Q: How to ignore case when using str_detect? The stringr package provides good string functions. To search for a string (ignoring case) one could use stringr::str_detect('TOYOTA subaru',ignore.case('toyota')) This works but gives a warning: Please use (fixed|coll|regex)(x, ignore_case = TRUE) instead of ignore.case(x) What is the right way of rewriting it? A: You can use the base R function grepl() to accomplish the same thing without a nested function. It simply accepts ignore.case as an argument. grepl("toyota", 'TOYOTA subaru', ignore.case = TRUE) (Note that the order of the first two arguments (pattern and string) is switched between grepl and str_detect.) A: You can use the regex (or fixed as suggested in @lmo's comment, depending on what you need) function to make the pattern, as detailed in ?modifiers or ?str_detect (see the instructions for the pattern parameter): library(stringr) str_detect('TOYOTA subaru', regex('toyota', ignore_case = T)) # [1] TRUE A: You can save a little typing with (?i): c("Toyota", "my TOYOTA", "your Subaru") %>% str_detect( "(?i)toyota" ) # [1] TRUE TRUE FALSE A: The search string must be inside the fixed function, which has the ignore_case parameter: str_detect('TOYOTA subaru', fixed('toyota', ignore_case=TRUE)) A: Or you can erase all capitalization while you search: str_detect(tolower('TOYOTA subaru'), 'toyota')
stackoverflow
{ "language": "en", "length": 197, "provenance": "stackexchange_0000F.jsonl.gz:860949", "question_score": "49", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530029" }
f279b29bcaa1032ac52d72d7dfbf597ff335c1b7
Stackoverflow Stackexchange Q: How do I extract a Future[String] from an akka Source[ByteString, _]? I am attempting to stream a file using akka streams and am running into a small issue extracting the results of the stream into a Future[String]: def streamMigrationFile(source: Source[ByteString, _]): Future[String] = { var fileString = "" val sink = Sink.foreach[ByteString](byteString => fileString = fileString.concat(byteString.decodeString("US-ASCII"))) source.runWith(sink) } I'm getting a compilation error: Expression of type Future[Done] does not conform to expected type Future[String] Can anyone help me understand what I'm doing wrong and what I need to do to extract the results of the stream? A: If what I'm guessing is right, you want to stream the whole file content into a string. This is best achieved with a Sink.fold, not with a Sink.foreach. Example below. def streamMigrationFile(source: Source[ByteString, _]): Future[String] = { val sink = Sink.fold[String, ByteString]("") { case (acc, str) => acc + str.decodeString("US-ASCII") } source.runWith(sink) } You're probably aware of this, but your file will need to fit into memory for your program to run correctly.
Q: How do I extract a Future[String] from an akka Source[ByteString, _]? I am attempting to stream a file using akka streams and am running into a small issue extracting the results of the stream into a Future[String]: def streamMigrationFile(source: Source[ByteString, _]): Future[String] = { var fileString = "" val sink = Sink.foreach[ByteString](byteString => fileString = fileString.concat(byteString.decodeString("US-ASCII"))) source.runWith(sink) } I'm getting a compilation error: Expression of type Future[Done] does not conform to expected type Future[String] Can anyone help me understand what I'm doing wrong and what I need to do to extract the results of the stream? A: If what I'm guessing is right, you want to stream the whole file content into a string. This is best achieved with a Sink.fold, not with a Sink.foreach. Example below. def streamMigrationFile(source: Source[ByteString, _]): Future[String] = { val sink = Sink.fold[String, ByteString]("") { case (acc, str) => acc + str.decodeString("US-ASCII") } source.runWith(sink) } You're probably aware of this, but your file will need to fit into memory for your program to run correctly. A: If you look at the definition of Sink.foreach you'll find the evaluation type is Sink[T, Future[Done]], which means it doesn't matter what happens with the result of the computation of the elements in the stream. Following is the definition: def foreach[T](f: T ⇒ Unit): Sink[T, Future[Done]] On the other hand, the definition of Sink.fold evaluates to a Future[U], where U is the type of the zero. In other words, you are able to define what the type of the future will be at the end of the processing. The following is the definition (and implementation) of Sink.fold: def fold[U, T](zero: U)(f: (U, T) ⇒ U): Sink[T, Future[U]] = Flow[T].fold(zero)(f).toMat(Sink.head)(Keep.right).named("foldSink") According to the implementation above you can see that the type to be kept in the materialization is Future[U] because of the Keep.right, which means something like: "I don't care if the elements coming in are Ts (or ByteString in your case); I (the stream) will give you Us (or String in your case) .. when I'm done (in a Future)" The following is a working example of your case, replacing the Sink.foreach with Sink.fold and evaluating the whole expression to Future[String]: def streamMigrationFile(source: Source[ByteString, _]): Future[String] = { var fileString = "" //def foreach[T](f: T ⇒ Unit): Sink[T, Future[Done]] val sinkForEach: Sink[ByteString, Future[Done]] = Sink.foreach[ByteString](byteString => fileString = fileString.concat(byteString.decodeString("US-ASCII"))) /* def fold[U, T](zero: U)(f: (U, T) ⇒ U): Sink[T, Future[U]] = Flow[T].fold(zero)(f).toMat(Sink.head)(Keep.right).named("foldSink") */ val sinkFold: Sink[ByteString, Future[String]] = Sink.fold("") { case (acc, str) => acc + str } val res1: Future[Done] = source.runWith(sinkForEach) val res2: Future[String] = source.runWith(sinkFold) res2 }
stackoverflow
{ "language": "en", "length": 433, "provenance": "stackexchange_0000F.jsonl.gz:860961", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530074" }
58377f9fe4c8d1a8ab3303b28011c6e68513d35d
Stackoverflow Stackexchange Q: How can I sort a list of maps by value of some specific key using Java 8? How can I sort a List of Map<String, String> using Java 8? The map contains a key called last_name, and the value associated with it may be null. I'm not sure how to do it because the following results in a compiler error: List<Map<String, String>> peopleList = ... peopleList.sort(Comparator.comparing(Map::get, Comparator.nullsLast(Comparator.naturalOrder()))); Is using an anonymous class the only way to do it? Note that I am not trying to sort each map in the list. I want to sort the list itself based on a key in each map. A: Since your peopleList might contain a null and the Map::key might have a null value, you probably need to nullsLast twice: peopleList.sort(Comparator.nullsLast(Comparator.comparing(m -> m.get("last_name"), Comparator.nullsLast(Comparator.naturalOrder()))));
Q: How can I sort a list of maps by value of some specific key using Java 8? How can I sort a List of Map<String, String> using Java 8? The map contains a key called last_name, and the value associated with it may be null. I'm not sure how to do it because the following results in a compiler error: List<Map<String, String>> peopleList = ... peopleList.sort(Comparator.comparing(Map::get, Comparator.nullsLast(Comparator.naturalOrder()))); Is using an anonymous class the only way to do it? Note that I am not trying to sort each map in the list. I want to sort the list itself based on a key in each map. A: Since your peopleList might contain a null and the Map::key might have a null value, you probably need to nullsLast twice: peopleList.sort(Comparator.nullsLast(Comparator.comparing(m -> m.get("last_name"), Comparator.nullsLast(Comparator.naturalOrder())))); A: This should fit your requirement. peopleList.sort((o1, o2) -> o1.get("last_name").compareTo(o2.get("last_name"))); In case you want handle the null pointer try this "old fashion" solution: peopleList.sort((o1, o2) -> { String v1 = o1.get("last_name"); String v2 = o2.get("last_name"); return (v1 == v2) ? 0 : (v1 == null ? 1 : (v2 == null ? -1 : v1.compareTo(v2))) ; }); Switch 1 and -1 if you want the null values first or last. For thoroughness' sake I've added the generator of useful test cases: Random random = new Random(); random.setSeed(System.currentTimeMillis()); IntStream.range(0, random.nextInt(20)).forEach(i -> { Map<String, String> map1 = new HashMap<String, String>(); String name = new BigInteger(130, new SecureRandom()).toString(6); if (random.nextBoolean()) name = null; map1.put("last_name", name); peopleList.add(map1); }); A: It looks like you can rewrite your code like peopleList.sort(Comparator.comparing( m -> m.get("yourKey"), Comparator.nullsLast(Comparator.naturalOrder())) ) A: You can override de compare method of the Collection. Here is an example: https://www.mkyong.com/java8/java-8-lambda-comparator-example/ You will have to compare the last_name of the objects, and do not forget to handle the null properly. A: Try this: By Keys: Map<String, String> result = unsortMap.entrySet().stream() .sorted(Map.Entry.comparingByKey()) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (oldValue, newValue) -> oldValue, LinkedHashMap::new)); By Values: Map<String, String> result = unsortMap.entrySet().stream() .sorted(Map.Entry.comparingByValue(Comparator.reverseOrder())) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (oldValue, newValue) -> oldValue, LinkedHashMap::new)); A: Steps to sort a Map in Java 8. 1]Convert a Map into a Stream 2]Sort it 3]Collect and return a new LinkedHashMap (keep the order) Map result = map.entrySet().stream() .sorted(Map.Entry.comparingByKey()) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (oldValue, newValue) -> oldValue, LinkedHashMap::new));
stackoverflow
{ "language": "en", "length": 367, "provenance": "stackexchange_0000F.jsonl.gz:861022", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530263" }
57bc911944c7a2130c9f99dfaa8d56840cea8f39
Stackoverflow Stackexchange Q: Use Chokidar for watching over files with specific extensions I need to add support for this feature to my app. My current implementation is very simple: this.watcher.on("add", (pathName: string) => { this.sendNotifyAction(new NotifyAction(PathEvent.Add, pathName)); }).on("change", (pathName: string) => { this.sendNotifyAction(new NotifyAction(PathEvent.Change, pathName)); }).on("unlink", (pathName: string) => { this.sendNotifyAction(new NotifyAction(PathEvent.Delete, pathName)); }).on("ready", () => { this.sendReadinessNotification(); }); Now I want to have something like: private acceptedFileExtensions: string[] = ['.txt', '.docx', '.xlx', ...] And use this array of extensions inside Chokidar. So if a file in the watched directory has an extension from the list, send a notification; if not, do nothing. I saw a similar question https://stackoverflow.com/questions/40468608/use-chokidar-to-watch-for-specific-file-extension#=, but it's not what I really need. Filtering inside callback functions doesn't look good to me, but I don't see other variants. Please advise. Thank you. A: Thank you @robertklep, chokidar works with arrays. So my code looks like: private buildWildcardList(path:string): string[] { let result: string[] = []; _.each(this.acceptedFileExtensions, (extension: string) => { result.push(path + '/**/*' + extension); }); return result; } let wildcardList: string[] = this.buildWildcardList(path); this.watcher = chokidar.watch(wildcardList, watchOptions);
Q: Use Chokidar for watching over files with specific extensions I need to add support for this feature to my app. My current implementation is very simple: this.watcher.on("add", (pathName: string) => { this.sendNotifyAction(new NotifyAction(PathEvent.Add, pathName)); }).on("change", (pathName: string) => { this.sendNotifyAction(new NotifyAction(PathEvent.Change, pathName)); }).on("unlink", (pathName: string) => { this.sendNotifyAction(new NotifyAction(PathEvent.Delete, pathName)); }).on("ready", () => { this.sendReadinessNotification(); }); Now I want to have something like: private acceptedFileExtensions: string[] = ['.txt', '.docx', '.xlx', ...] And use this array of extensions inside Chokidar. So if a file in the watched directory has an extension from the list, send a notification; if not, do nothing. I saw a similar question https://stackoverflow.com/questions/40468608/use-chokidar-to-watch-for-specific-file-extension#=, but it's not what I really need. Filtering inside callback functions doesn't look good to me, but I don't see other variants. Please advise. Thank you. A: Thank you @robertklep, chokidar works with arrays. So my code looks like: private buildWildcardList(path:string): string[] { let result: string[] = []; _.each(this.acceptedFileExtensions, (extension: string) => { result.push(path + '/**/*' + extension); }); return result; } let wildcardList: string[] = this.buildWildcardList(path); this.watcher = chokidar.watch(wildcardList, watchOptions);
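A possible alternative sketch (not from the accepted answer): chokidar's ignored watch option also accepts a predicate function, so the extension filter can live in the watch options instead of in glob construction. The watch path below is a placeholder.

import * as path from "path";
import * as chokidar from "chokidar";

const watchPath = "/path/to/dir";  // hypothetical
const acceptedFileExtensions = [".txt", ".docx", ".xlx"];

const watcher = chokidar.watch(watchPath, {
  ignored: (p: string) => {
    const ext = path.extname(p);
    // keep extension-less paths (directories) so chokidar can descend into them
    return ext !== "" && acceptedFileExtensions.indexOf(ext) === -1;
  },
});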
stackoverflow
{ "language": "en", "length": 176, "provenance": "stackexchange_0000F.jsonl.gz:861098", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530477" }
039e451927fa2f36f1b7e19a5f7a670d6e26b721
Stackoverflow Stackexchange Q: Password confirmation in Rails 5 This is my view: <%=form_for [:admin, @user] do |f|%> <ul> <% @user.errors.full_messages.each do |msg| %> <li><%= msg %></li> <% end %> </ul> <%=f.label :name %> <%=f.text_field :name %> <%=f.label :email %> <%=f.text_field :email %> <%=f.label :password %> <%=f.password_field :password %> <%=f.label :password_confirmation %> <%=f.password_field :password_confirmation%> <%=f.submit "Submit" %> <%end%> Controller code for adding a user: def create @user = User.new(user_params) if @user.save redirect_to admin_users_path else render 'new' end end private def user_params params.require(:user).permit(:name, :email, :password) end These are the validations in the model: validates :name, presence: true validates :email, presence: true validates :password, presence: true validates :password, confirmation: { case_sensitive: true } But the password confirmation doesn't work. Validation works for all (required) form elements, except the second password input - password_confirmation - which can be different from the first password input. The user is added to the database even if the second password input is empty, because there is no rule for that in the validation rules. What am I doing wrong? A: You need to add password_confirmation to user_params in the controller. I.e. def user_params params.require(:user).permit(:name, :email, :password, :password_confirmation) end
Q: Password confirmation in Rails 5 This is my view: <%=form_for [:admin, @user] do |f|%> <ul> <% @user.errors.full_messages.each do |msg| %> <li><%= msg %></li> <% end %> </ul> <%=f.label :name %> <%=f.text_field :name %> <%=f.label :email %> <%=f.text_field :email %> <%=f.label :password %> <%=f.password_field :password %> <%=f.label :password_confirmation %> <%=f.password_field :password_confirmation%> <%=f.submit "Submit" %> <%end%> Controller code for adding a user: def create @user = User.new(user_params) if @user.save redirect_to admin_users_path else render 'new' end end private def user_params params.require(:user).permit(:name, :email, :password) end These are the validations in the model: validates :name, presence: true validates :email, presence: true validates :password, presence: true validates :password, confirmation: { case_sensitive: true } But the password confirmation doesn't work. Validation works for all (required) form elements, except the second password input - password_confirmation - which can be different from the first password input. The user is added to the database even if the second password input is empty, because there is no rule for that in the validation rules. What am I doing wrong? A: You need to add password_confirmation to user_params in the controller. I.e. def user_params params.require(:user).permit(:name, :email, :password, :password_confirmation) end A: Try: validates_confirmation_of :password Model: class Person < ActiveRecord::Base validates_confirmation_of :user_name, :password validates_confirmation_of :email_address, :message => "should match confirmation" end View: <%= password_field "person", "password" %> <%= password_field "person", "password_confirmation" %> You could take a look at validates_confirmation_of.
stackoverflow
{ "language": "en", "length": 223, "provenance": "stackexchange_0000F.jsonl.gz:861103", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530504" }
e1ba6d218630cf70c2634c3eccdea7270a896a68
Stackoverflow Stackexchange Q: Automatically associate new Sonar project with custom quality profile and quality gate Our use case for Sonar creates new Sonar projects for each branch of our repository. How do we automatically associate the new branch project with a (non-default) Quality Profile and Quality Gate? We're running this in a Maven project if that's relevant. A: We had the same issue, within our company, and the only solution was to use the deprecated attribute sonar.profile (https://docs.sonarqube.org/display/SONAR/Analysis+Parameters). Sidenote: Generally there is also a interesting view on how to analyze branches. The general recommendation from sonarSource suggests to only use preview modes for short living branches. As a fact bitbucket-plugins with a richer featureset than just commenting issues, sadly need branch based analysis. https://jira.sonarsource.com/browse/SONAR-5370 - the property will be removed in 4.5.1 based on the sonar task
Q: Automatically associate new Sonar project with custom quality profile and quality gate Our use case for Sonar creates new Sonar projects for each branch of our repository. How do we automatically associate the new branch project with a (non-default) Quality Profile and Quality Gate? We're running this in a Maven project if that's relevant. A: We had the same issue, within our company, and the only solution was to use the deprecated attribute sonar.profile (https://docs.sonarqube.org/display/SONAR/Analysis+Parameters). Sidenote: Generally there is also a interesting view on how to analyze branches. The general recommendation from sonarSource suggests to only use preview modes for short living branches. As a fact bitbucket-plugins with a richer featureset than just commenting issues, sadly need branch based analysis. https://jira.sonarsource.com/browse/SONAR-5370 - the property will be removed in 4.5.1 based on the sonar task A: Use the api/projects/create web service to provision your projects. You can then call api/qualityprofiles/add_project to assign your new project to the proper profiles. (You'll need to have first looked up the profile id's tho with api/qualityprofiles/search.)
stackoverflow
{ "language": "en", "length": 172, "provenance": "stackexchange_0000F.jsonl.gz:861120", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530552" }
7c25a20debdd9f7e6fcf9916aa5984df5b6d15a5
Stackoverflow Stackexchange Q: Mockito+PowerMock gradle configuration I need to use mockito and powermock in my android instrumented tests. The main problem is that configuring both of them in Gradle is tricky because of conflicts and other issues. Maybe somebody who has a working configuration of the .gradle file for mockito+powermock in android instrumented tests could share it? A: This is my gradle configuration to use mockito and powerMock: dependencies { ... /**Power mock**/ testCompile "org.powermock:powermock-core:1.7.3" testCompile "org.powermock:powermock-module-junit4:1.7.3" testCompile "org.powermock:powermock-api-mockito2:1.7.3" /**End of power mock **/ } NOTE: I had to remove the mockito dependency in order to make it work: //Remove this line testImplementation "org.mockito:mockito-core:2.13.0"
Q: Mockito+PowerMock gradle configuration I need to use mockito and powermock in my android instrumented tests. The main problem is that configuring both of them in Gradle is tricky because of conflicts and other issues. Maybe somebody who has a working configuration of the .gradle file for mockito+powermock in android instrumented tests could share it? A: This is my gradle configuration to use mockito and powerMock: dependencies { ... /**Power mock**/ testCompile "org.powermock:powermock-core:1.7.3" testCompile "org.powermock:powermock-module-junit4:1.7.3" testCompile "org.powermock:powermock-api-mockito2:1.7.3" /**End of power mock **/ } NOTE: I had to remove the mockito dependency in order to make it work: //Remove this line testImplementation "org.mockito:mockito-core:2.13.0" A: Here is the configuration I am using, and it's working perfectly fine. After 1.7.0, powermock-api-mockito changed to powermock-api-mockito2. testImplementation 'org.mockito:mockito-all:1.10.19' testImplementation "org.powermock:powermock-module-junit4:2.0.7" testImplementation "org.powermock:powermock-module-junit4-rule:2.0.7" testImplementation "org.powermock:powermock-api-mockito2:2.0.7" testImplementation "org.powermock:powermock-classloading-xstream:1.6.6"
stackoverflow
{ "language": "en", "length": 132, "provenance": "stackexchange_0000F.jsonl.gz:861122", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530558" }
4ff4e273a26d57160b1cecb595267c27b712b0ad
Stackoverflow Stackexchange Q: Mongomapper scope concatenation I'm experiencing strange behaviour with MongoMapper scope concatenation. Below is an example. I have two scopes: scope :active, where( :name => { :$in => Model2.active.distinct(:city) } ) scope :by_htype_id, lambda{|htype_id| where( :name => { :$in => Model2.by_htype_id(htype_id).distinct(:city) } ) } If I run Model1.by_htype_id("some_id") it works as expected, but if I concatenate the two scopes Model1.active.by_htype_id("some_id") I obtain all the results from the active scope, while I would expect to obtain the subset of the active scope that depends on by_htype_id. EDIT: If I write the concatenation of the scopes as a single query it works as expected. I would expect the concatenation to result in an AND combination of the two scopes. As I said, I'm having the problem just concatenating some scopes, not with every scope.
Q: Mongomapper scope concatenation I'm experiencing strange behaviour with MongoMapper scope concatenation. Below is an example. I have two scopes: scope :active, where( :name => { :$in => Model2.active.distinct(:city) } ) scope :by_htype_id, lambda{|htype_id| where( :name => { :$in => Model2.by_htype_id(htype_id).distinct(:city) } ) } If I run Model1.by_htype_id("some_id") it works as expected, but if I concatenate the two scopes Model1.active.by_htype_id("some_id") I obtain all the results from the active scope, while I would expect to obtain the subset of the active scope that depends on by_htype_id. EDIT: If I write the concatenation of the scopes as a single query it works as expected. I would expect the concatenation to result in an AND combination of the two scopes. As I said, I'm having the problem just concatenating some scopes, not with every scope.
stackoverflow
{ "language": "en", "length": 129, "provenance": "stackexchange_0000F.jsonl.gz:861187", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530747" }
b3b405d98fee3e1e3eaf028ab1e4c0fffdd35fea
Stackoverflow Stackexchange Q: formats for pyspark.sql.DataFrameWriter.saveAsTable() Does anyone know where I can find a list of available formats for the saveAsTable() function in pyspark.sql.DataFrameWriter? In the documentation it just says "the format used to save." Every example I see uses 'parquet' but I can't find anything else mentioned. Specifically, I would like to save to Feather somehow out of pyspark. Thank you! A: To my knowledge, the formats supported out of the box, per the source code https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala, are: * *Parquet *JSON *orc *JDBC *text *csv *source (simply takes the source format of what you are saving) So Feather is not supported out of the box by saveAsTable(). Depending on your setup, you could try to save directly to HDFS, which would look something like: import feather path= "my_data.feather" #this would then be your full hdfs URI feather.write_dataframe(df, path) (Taken from the feather integration tests: https://github.com/wesm/feather/blob/6b5a27c58d1e850f4eabb8c013e0976b8844eb3c/integration-tests/test_roundtrips.py) Hopefully this was helpful; let me know if anything was wrong or unclear.
Q: formats for pyspark.sql.DataFrameWriter.saveAsTable() Does anyone know where I can find a list of available formats for the saveAsTable() function in pyspark.sql.DataFrameWriter? In the documentation it just says "the format used to save." Every example I see uses 'parquet' but I can't find anything else mentioned. Specifically, I would like to save to Feather somehow out of pyspark. Thank you! A: To my knowledge, the formats supported out of the box, per the source code https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala, are: * *Parquet *JSON *orc *JDBC *text *csv *source (simply takes the source format of what you are saving) So Feather is not supported out of the box by saveAsTable(). Depending on your setup, you could try to save directly to HDFS, which would look something like: import feather path= "my_data.feather" #this would then be your full hdfs URI feather.write_dataframe(df, path) (Taken from the feather integration tests: https://github.com/wesm/feather/blob/6b5a27c58d1e850f4eabb8c013e0976b8844eb3c/integration-tests/test_roundtrips.py) Hopefully this was helpful; let me know if anything was wrong or unclear.
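For completeness, a minimal sketch of selecting one of those formats from PySpark (the database, table, and variable names are assumptions):

# assumes an active SparkSession `spark` and a DataFrame `df`
df.write.format("orc").saveAsTable("my_db.my_table")

# equivalent keyword form
df.write.saveAsTable("my_db.my_table_json", format="json", mode="overwrite")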
stackoverflow
{ "language": "en", "length": 161, "provenance": "stackexchange_0000F.jsonl.gz:861207", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530799" }
4051948d46cbb71ff2a5971a999b2bc69a7c4d24
Stackoverflow Stackexchange Q: Is merge without conflicts equivalent to rebase without conflicts? Is it correct that a git merge will succeed with no conflicts if and only if the corresponding git rebase will succeed with no conflicts? A: No, and in fact there's a trivial case where merge works fine but rebase doesn't: ...--o--A <-- mainline \ B--C--!C <-- branch where C is a commit that conflicts with A, and !C is its reversion. Merging branch back into mainline is equivalent in source-tree effect to merging commit B back into mainline, while rebasing copies both C (which conflicts with A) and then !C (which when being resolved, also conflicts with A). Of course, you can rebase interactively and simply drop both C and !C in this case, but in more complex chains, you can see how a commit might conflict with A but a subsequent commit might effectively resolve that conflict "in advance", so that merging the tip of the branch back into the mainline has no conflicts.
Q: Is merge without conflicts equivalent to rebase without conflicts? Is it correct that a git merge will succeed with no conflicts if and only if the corresponding git rebase will succeed with no conflicts? A: No, and in fact there's a trivial case where merge works fine but rebase doesn't: ...--o--A <-- mainline \ B--C--!C <-- branch where C is a commit that conflicts with A, and !C is its reversion. Merging branch back into mainline is equivalent in source-tree effect to merging commit B back into mainline, while rebasing copies both C (which conflicts with A) and then !C (which when being resolved, also conflicts with A). Of course, you can rebase interactively and simply drop both C and !C in this case, but in more complex chains, you can see how a commit might conflict with A but a subsequent commit might effectively resolve that conflict "in advance", so that merging the tip of the branch back into the mainline has no conflicts.
stackoverflow
{ "language": "en", "length": 166, "provenance": "stackexchange_0000F.jsonl.gz:861217", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530822" }
243d13032ac4436289292bf8223752d886750c0d
Stackoverflow Stackexchange Q: Why do I need a server with create-react-app I recently created a website with create-react-app. Since this is not a web app, why do I need a server to see it? I tried to open the index file in the build folder, but it doesn't work unless I'm serving it from a server. A: You can set the homepage root URL in the package.json file, e.g. { ..., "homepage": "file:///<path to build directory>" } npm run build This will now find the static content in the build directory from the file system, without a server.
Q: Why do I need a server with create-react-app I recently created a website with create-react-app. Since this is not a web app, why do I need a server to see it? I tried to open the index file in the build folder, but it doesn't work unless I'm serving it from a server. A: You can set the homepage root URL in the package.json file, e.g. { ..., "homepage": "file:///<path to build directory>" } npm run build This will now find the static content in the build directory from the file system, without a server.
stackoverflow
{ "language": "en", "length": 94, "provenance": "stackexchange_0000F.jsonl.gz:861231", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530861" }
eb58e816e41f161b447ec1d7322200e48e0db73d
Stackoverflow Stackexchange Q: Can a progressive web app (PWA) run a background service on a mobile device to get data from hardware (accelerometer, gps...)? I see we can check the capabilities of a mobile browser using https://whatwebcando.today/, but can the hardware APIs be queried when not running in the foreground? I mean... with a PWA, am I able to build an app that gets hardware info while running in the background, just like Octo U for Android, and posts that info to a web server? A: The modern method of running code "in the background" is by using a service worker, either via its push event handler (triggered via an incoming push message), or via its sync event handler (triggered by an automatic replay of a task that previously failed). It's not currently possible to access the type of hardware sensors that you're asking about from inside a service worker.
Q: Can a progressive web app (PWA) run a background service on a mobile device to get data from hardware (accelerometer, gps...)? I see we can check the capabilities of a mobile browser using https://whatwebcando.today/, but can the hardware APIs be queried when not running in the foreground? I mean... with a PWA, am I able to build an app that gets hardware info while running in the background, just like Octo U for Android, and posts that info to a web server? A: The modern method of running code "in the background" is by using a service worker, either via its push event handler (triggered via an incoming push message), or via its sync event handler (triggered by an automatic replay of a task that previously failed). It's not currently possible to access the type of hardware sensors that you're asking about from inside a service worker. A: Service workers run on an event-driven model. This means they only spin up when registered events fire (for now: the browser making a network request, push notifications, and background sync). What I think you are asking for is geo-fencing capability. AFAIK this is something being discussed as an addition to the service worker model; if not, it should be, because it would be very valuable for marketing purposes. I know it is being used in native apps, so I think it is on the radar. GPS has been accessible from the front end in the browser for years. However, the user would need to have your site/PWA loaded in the browser.
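To make the event-driven model concrete, here is a minimal, illustrative service worker sketch (the notification title, sync tag, and /api/upload endpoint are invented for the example). Note that neither handler can reach the geolocation or motion-sensor APIs:

self.addEventListener('push', (event) => {
  // Fires when a push message arrives, even with no page open.
  const payload = event.data ? event.data.text() : 'no payload';
  event.waitUntil(
    self.registration.showNotification('Update', { body: payload })
  );
});

self.addEventListener('sync', (event) => {
  // Fires when the browser replays a previously registered sync tag.
  if (event.tag === 'retry-upload') {
    event.waitUntil(fetch('/api/upload', { method: 'POST' }));
  }
});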
stackoverflow
{ "language": "en", "length": 257, "provenance": "stackexchange_0000F.jsonl.gz:861236", "question_score": "40", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530885" }
7069eeaafbeb5acdbb48d368695ad987bfc5b4a3
Stackoverflow Stackexchange Q: how to get nested xml tags in XSL? I am working on code to generate a CSV file from XML using XSL. I have to access a tag one or two levels up, e.g. I have to access the <value> tag once and the <embeddeddata> tag in a loop <root> <row> <data> <value>someValue </value> </data> <dtl> <embeddeddata> <col1>col1 </col1> </embeddeddata> <embeddeddata> <col1>col2 </col1> </embeddeddata> </dtl> </row> </root> A: Try accessing the element via ../../, similar to the Linux cd command, to reach the element from inside the for-each loop
Q: how to get nested xml tags in XSL? I am working on code to generate a CSV file from XML using XSL. I have to access a tag one or two levels up, e.g. I have to access the <value> tag once and the <embeddeddata> tag in a loop <root> <row> <data> <value>someValue </value> </data> <dtl> <embeddeddata> <col1>col1 </col1> </embeddeddata> <embeddeddata> <col1>col2 </col1> </embeddeddata> </dtl> </row> </root> A: Try accessing the element via ../../, similar to the Linux cd command, to reach the element from inside the for-each loop A: Try accessing elements in the XSL file via ../../
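To make the ../../ advice concrete, here is an illustrative XSLT 1.0 sketch (the CSV layout and template structure are assumptions, not from the question) that emits one CSV row per <embeddeddata>, climbing two levels back up to reach <value> from inside the loop:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/root/row">
    <xsl:for-each select="dtl/embeddeddata">
      <!-- ../../ climbs from embeddeddata past dtl up to row -->
      <xsl:value-of select="../../data/value"/>
      <xsl:text>,</xsl:text>
      <xsl:value-of select="col1"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>

Applied to the sample input above, this produces one line per embeddeddata element, pairing the shared value with each col1.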
stackoverflow
{ "language": "en", "length": 97, "provenance": "stackexchange_0000F.jsonl.gz:861239", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530894" }
e5ffbc71a46bb6fbfd43429a34fd1470e4c72428
Stackoverflow Stackexchange Q: Cannot Import LinearRegression from Sklearn from sklearn.linear_model import LinearRegression gives me this error in Jupyter Notebook: --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-127-36ba82e2d702> in <module>() ----> 1 from sklearn.linear_model import LinearRegression 2 3 lin_reg = LinearRegression() 4 lin_reg.fit(housing_prepared, housing_labels) C:\Users\David\Anaconda2\lib\site-packages\sklearn\linear_model\__init__.py in <module>() 19 MultiTaskElasticNet, MultiTaskElasticNetCV, 20 MultiTaskLassoCV) ---> 21 from .huber import HuberRegressor 22 from .sgd_fast import Hinge, Log, ModifiedHuber, SquaredLoss, Huber 23 from .stochastic_gradient import SGDClassifier, SGDRegressor C:\Users\David\Anaconda2\lib\site-packages\sklearn\linear_model\huber.py in <module>() 10 from ..utils import check_X_y 11 from ..utils import check_consistent_length ---> 12 from ..utils import axis0_safe_slice 13 from ..utils.extmath import safe_sparse_dot 14 ImportError: cannot import name axis0_safe_slice I can import things from sklearn.preprocessing fine. Thanks for your help! A: Don't know what the exact issue was, but uninstalling and reinstalling scikit-learn fixed this for me: pip uninstall scikit-learn pip install scikit-learn
Q: Cannot Import LinearRegression from Sklearn from sklearn.linear_model import LinearRegression gives me this error in Jupyter Notebook: --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-127-36ba82e2d702> in <module>() ----> 1 from sklearn.linear_model import LinearRegression 2 3 lin_reg = LinearRegression() 4 lin_reg.fit(housing_prepared, housing_labels) C:\Users\David\Anaconda2\lib\site-packages\sklearn\linear_model\__init__.py in <module>() 19 MultiTaskElasticNet, MultiTaskElasticNetCV, 20 MultiTaskLassoCV) ---> 21 from .huber import HuberRegressor 22 from .sgd_fast import Hinge, Log, ModifiedHuber, SquaredLoss, Huber 23 from .stochastic_gradient import SGDClassifier, SGDRegressor C:\Users\David\Anaconda2\lib\site-packages\sklearn\linear_model\huber.py in <module>() 10 from ..utils import check_X_y 11 from ..utils import check_consistent_length ---> 12 from ..utils import axis0_safe_slice 13 from ..utils.extmath import safe_sparse_dot 14 ImportError: cannot import name axis0_safe_slice I can import things from sklearn.preprocessing fine. Thanks for your help! A: Don't know what the exact issue was, but uninstalling and reinstalling scikit-learn fixed this for me: pip uninstall scikit-learn pip install scikit-learn
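A quick, illustrative way to confirm the reinstall worked (the printed version will vary by environment):

# Run after reinstalling to confirm the package and the import are healthy.
import sklearn
print(sklearn.__version__)

from sklearn.linear_model import LinearRegression
print(LinearRegression())  # should construct the estimator without raising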
stackoverflow
{ "language": "en", "length": 134, "provenance": "stackexchange_0000F.jsonl.gz:861244", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530903" }
f630c07ee3e3102dadc315b30ff897cbb6b588fe
Stackoverflow Stackexchange Q: Password Requirements when making an account with Firebase What are the requirements to avoid throwing the "auth/weak-password" error code from firebase.auth().createUserWithEmailAndPassword(email, password)? I would like to show the user the requirements so they don't hit the error several times on the way to creating an account. I have checked the Firebase documentation (https://firebase.google.com/docs/reference/js/firebase.auth.Auth) and Stack Overflow but have not found this info. A: The only weakness test I'm aware of is a length of less than 6 characters: public final class FirebaseAuthWeakPasswordException extends FirebaseAuthInvalidCredentialsException Thrown when using a weak password (less than 6 chars) to create a new account or to update an existing account's password. Use getReason() to get a message with the reason the validation failed that you can display to your users. Excerpted from this documentation.
Q: Password Requirements when making an account with Firebase What are the requirements to avoid throwing the "auth/weak-password" error code from firebase.auth().createUserWithEmailAndPassword(email, password)? I would like to show the user the requirements so they don't hit the error several times on the way to creating an account. I have checked the Firebase documentation (https://firebase.google.com/docs/reference/js/firebase.auth.Auth) and Stack Overflow but have not found this info. A: The only weakness test I'm aware of is a length of less than 6 characters: public final class FirebaseAuthWeakPasswordException extends FirebaseAuthInvalidCredentialsException Thrown when using a weak password (less than 6 chars) to create a new account or to update an existing account's password. Use getReason() to get a message with the reason the validation failed that you can display to your users. Excerpted from this documentation.
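Given that documented rule, a small illustrative JavaScript guard (the function name and error text are invented here) can surface the requirement before Firebase is ever called:

function signUp(email, password) {
  // Mirror Firebase's only documented weakness rule: at least 6 characters.
  if (password.length < 6) {
    return Promise.reject(new Error('Password must be at least 6 characters.'));
  }
  return firebase.auth().createUserWithEmailAndPassword(email, password);
}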
stackoverflow
{ "language": "en", "length": 131, "provenance": "stackexchange_0000F.jsonl.gz:861267", "question_score": "12", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44530984" }
0bdc95b39afa5ad362be73d599c8c94025c36905
Stackoverflow Stackexchange Q: How to use post steps with Jenkins pipeline on multiple agents? When using the Jenkins pipeline where each stage runs on a different agent, it is good practice to use agent none at the beginning: pipeline { agent none stages { stage('Checkout') { agent { label 'master' } steps { script { currentBuild.result = 'SUCCESS' } } } stage('Build') { agent { label 'someagent' } steps { bat "exit 1" } } } post { always { step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "[email protected]", sendToIndividuals: true]) } } } But doing this leads to a "Required context class hudson.FilePath is missing" error message when the email should go out: [Pipeline] { (Declarative: Post Actions) [Pipeline] step Required context class hudson.FilePath is missing Perhaps you forgot to surround the code with a step that provides this, such as: node [Pipeline] error [Pipeline] } When I change from agent none to agent any, it works fine. How can I get the post step to work without using agent any? A: Wrap the step that does the mailing in a node step: post { always { node('awesome_node_label') { step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "[email protected]", sendToIndividuals: true]) } } }
Q: How to use post steps with Jenkins pipeline on multiple agents? When using the Jenkins pipeline where each stage runs on a different agent, it is good practice to use agent none at the beginning: pipeline { agent none stages { stage('Checkout') { agent { label 'master' } steps { script { currentBuild.result = 'SUCCESS' } } } stage('Build') { agent { label 'someagent' } steps { bat "exit 1" } } } post { always { step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "[email protected]", sendToIndividuals: true]) } } } But doing this leads to a "Required context class hudson.FilePath is missing" error message when the email should go out: [Pipeline] { (Declarative: Post Actions) [Pipeline] step Required context class hudson.FilePath is missing Perhaps you forgot to surround the code with a step that provides this, such as: node [Pipeline] error [Pipeline] } When I change from agent none to agent any, it works fine. How can I get the post step to work without using agent any? A: Wrap the step that does the mailing in a node step: post { always { node('awesome_node_label') { step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "[email protected]", sendToIndividuals: true]) } } } A: I know this is old, but I stumbled on this looking for something related. If you want to run the post step on any node, you can use post { always { node(null) { step([$class: 'Mailer', notifyEveryUnstableBuild: true, recipients: "[email protected]", sendToIndividuals: true]) } } } The documentation at https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#node-allocate-node says that the label may be left blank. Many times in a declarative pipeline, if something is left blank, this results in an error. To work around this, setting it to null will often work.
stackoverflow
{ "language": "en", "length": 277, "provenance": "stackexchange_0000F.jsonl.gz:861273", "question_score": "36", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531003" }
4e3eb5161ec0ef5d49273ea59d033405de4eceb1
Stackoverflow Stackexchange Q: destructuring in lambda function returns unexpected value The correct, expected value is returned when a regular function is used with destructuring: [{"k":"key1","v":"val1"},{"k":"key2","v":"val2"},{"k":"key3","v":"val3"}] console.log(JSON.stringify([{ k: 'key1', v: 'val1', z: 'z1' }, { k: 'key2', v: 'val2', z: 'z2' }, { k: 'key3', v: 'val3', z: 'z3' }].map(function(x) { let {k, v} = x; return {k, v }; }))); However, when a lambda (arrow) function is used with destructuring, an incorrect value is returned: [{"k":"key1","v":"val1","z":"z1"},{"k":"key2","v":"val2","z":"z2"},{"k":"key3","v":"val3","z":"z3"}] console.log(JSON.stringify([{ k: 'key1', v: 'val1', z: 'z1' }, { k: 'key2', v: 'val2', z: 'z2' }, { k: 'key3', v: 'val3', z: 'z3' }].map(x => ({k, v} = x) ))); How would I use destructuring inside a lambda function such that it returns the same as using the explicit function() above? A: You could use destructuring inside of the parameters of the callback of Array#map let array = [{ k: 'key1', v: 'val1', z: 'z1' }, { k: 'key2', v: 'val2', z: 'z2' }, { k: 'key3', v: 'val3', z: 'z3' }]; console.log(array.map(({ k, v }) => ({ k, v }))); .as-console-wrapper { max-height: 100% !important; top: 0; }
Q: destructuring in lambda function returns unexpected value The correct, expected value is returned when a regular function is used with destructuring: [{"k":"key1","v":"val1"},{"k":"key2","v":"val2"},{"k":"key3","v":"val3"}] console.log(JSON.stringify([{ k: 'key1', v: 'val1', z: 'z1' }, { k: 'key2', v: 'val2', z: 'z2' }, { k: 'key3', v: 'val3', z: 'z3' }].map(function(x) { let {k, v} = x; return {k, v }; }))); However, when a lambda (arrow) function is used with destructuring, an incorrect value is returned: [{"k":"key1","v":"val1","z":"z1"},{"k":"key2","v":"val2","z":"z2"},{"k":"key3","v":"val3","z":"z3"}] console.log(JSON.stringify([{ k: 'key1', v: 'val1', z: 'z1' }, { k: 'key2', v: 'val2', z: 'z2' }, { k: 'key3', v: 'val3', z: 'z3' }].map(x => ({k, v} = x) ))); How would I use destructuring inside a lambda function such that it returns the same as using the explicit function() above? A: You could use destructuring inside of the parameters of the callback of Array#map let array = [{ k: 'key1', v: 'val1', z: 'z1' }, { k: 'key2', v: 'val2', z: 'z2' }, { k: 'key3', v: 'val3', z: 'z3' }]; console.log(array.map(({ k, v }) => ({ k, v }))); .as-console-wrapper { max-height: 100% !important; top: 0; } A: This is caused because the code {k, v} = x; does not actually return {k, v} but x. So you need to destructure the object first instead of returning the destructuring assignment. A: The expression ({k, v} = x) assigns to the global k and v variables, and returns the right hand side value x. So you've essentially got an identity function x => x. You should use destructuring in the parameters, and build an object literal as the return value: […].map( ({k, v}) => ({k, v}) ); A: This works for me: console.log(JSON.stringify([{ k: 'key1', v: 'val1', z: 'z1' }, { k: 'key2', v: 'val2', z: 'z2' }, { k: 'key3', v: 'val3', z: 'z3' }].map(x => { let { k, v } = x; return { k, v }; })));
stackoverflow
{ "language": "en", "length": 303, "provenance": "stackexchange_0000F.jsonl.gz:861340", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531201" }
ca8b8a6b725789b2813be3053ea1fbe3c02da93d
Stackoverflow Stackexchange Q: Best way to handle undefined values in ReactJS? I'm accessing an API with ReactJS. What is the best way to stop a React component crashing when it accesses a property in the object provided by the API that may be 'undefined'? An example of an error is: TypeError: Cannot read property 'items' of undefined A: * *In a plain function you can do it with a simple if statement: if (typeof x !== 'undefined' && typeof x.item !== 'undefined') { } * *In JSX you do it this way (note that the expression must be wrapped in curly braces): render() { return ( <div> {(typeof x !== 'undefined' && typeof x.item !== 'undefined') ? <div>success</div> : <div>fail</div>} </div> ) } <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script>
Q: Best way to handle undefined values in ReactJS? I'm accessing an API with ReactJS. What is the best way to stop a React component crashing when it accesses a property in the object provided by the API that may be 'undefined'? An example of an error is: TypeError: Cannot read property 'items' of undefined A: * *In a plain function you can do it with a simple if statement: if (typeof x !== 'undefined' && typeof x.item !== 'undefined') { } * *In JSX you do it this way (note that the expression must be wrapped in curly braces): render() { return ( <div> {(typeof x !== 'undefined' && typeof x.item !== 'undefined') ? <div>success</div> : <div>fail</div>} </div> ) } <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script> A: It looks like you're trying to access the property items of a variable x. And if x is undefined, then calling x.items will give you the error you mentioned. Doing a simple: if (x) { // CODE here } or if (x && x.items) { // ensures both x and x.items are not undefined // CODE here } EDIT: You can now use Optional Chaining, which looks sweet: if (x?.items) A: This post talks about a few error handling strategies in your React app. But in your case, I think using a try-catch clause would be the most convenient. let results; const resultsFallback = { items: [] }; try { // assign res.items to results // res would be an object that you get from the API call results = res.items; // do stuff with items here res.items.map(e => { // do some stuff with elements in items property }) } catch(e) { // something went wrong when getting results, so set // results to a fallback object. results = resultsFallback; } I assume that you are using this only for one particular pesky React component. If you want to handle similar types of errors, I suggest you use ReactTryCatchBatchingStrategy from the blog post above. A: The best way to check for any such issue is to run your test code in the browser console. For a null check, one can simply test if (!x) or if (x == undefined) A: The optional chaining operator provides a way to simplify accessing values through connected objects when it's possible that a reference or function may be undefined or null. let customer = { name: "Carl", details: { age: 82, location: "Paradise Falls" // detailed address is unknown } }; let customerCity = customer.details?.address?.city; A: You can simply use a condition: if (x) { // Statement } else { // Statement }
stackoverflow
{ "language": "en", "length": 403, "provenance": "stackexchange_0000F.jsonl.gz:861342", "question_score": "22", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531204" }
71a5a1092700d0eba268b39c7020bd16abffc3f6
Stackoverflow Stackexchange Q: How to get actual value of global variables in llvm For example: int x=0; int y=0; where x and y are global variables, and in the main() function we do the following: x++; y++; How do I get the newest value of the global variables x and y in LLVM? When I try errs() << g; it gives the initial value, such as @BB0 = global i32, but I need the actual value, like x=1, by using LLVM. A: Assuming you're using LLVM's API: If the global is constant you can access its initialization value directly, for example: Constant* myGlobal = new GlobalVariable( myLlvmModule, myLlvmType, true, GlobalValue::InternalLinkage, initializationValue ); ... Constant* constValue = myGlobal->getInitializer(); And if that value is of e.g. integer type, you can retrieve it like so: ConstantInt* constInt = cast<ConstantInt>( constValue ); int64_t constIntValue = constInt->getSExtValue(); If the global isn't constant, you must load the data it points to (all globals are actually pointers): Value* loadedValue = new LoadInst( myGlobal );
Q: How to get actual value of global variables in llvm For example: int x=0; int y=0; where x and y are global variables, and in the main() function we do the following: x++; y++; How do I get the newest value of the global variables x and y in LLVM? When I try errs() << g; it gives the initial value, such as @BB0 = global i32, but I need the actual value, like x=1, by using LLVM. A: Assuming you're using LLVM's API: If the global is constant you can access its initialization value directly, for example: Constant* myGlobal = new GlobalVariable( myLlvmModule, myLlvmType, true, GlobalValue::InternalLinkage, initializationValue ); ... Constant* constValue = myGlobal->getInitializer(); And if that value is of e.g. integer type, you can retrieve it like so: ConstantInt* constInt = cast<ConstantInt>( constValue ); int64_t constIntValue = constInt->getSExtValue(); If the global isn't constant, you must load the data it points to (all globals are actually pointers): Value* loadedValue = new LoadInst( myGlobal ); A: A global is basically a pointer. You can get the address in the host program via ExecutionEngine::getGlobalValueAddress and then you can dereference that address in order to get the stored value.
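As a sketch of that second approach (assuming an MCJIT-based ExecutionEngine has already finalized and run the module; the helper name is illustrative):

#include <cstdint>
#include <string>
#include "llvm/ExecutionEngine/ExecutionEngine.h"

// Globals are pointers, so read the value through the mapped host address.
int32_t readGlobalI32(llvm::ExecutionEngine &EE, const std::string &Name) {
  uint64_t Addr = EE.getGlobalValueAddress(Name); // 0 if the global is unknown
  return Addr ? *reinterpret_cast<int32_t *>(Addr) : 0;
}

After main() has executed in the JIT, readGlobalI32(EE, "x") would return 1 for the example above.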
stackoverflow
{ "language": "en", "length": 195, "provenance": "stackexchange_0000F.jsonl.gz:861357", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531246" }
3405d5c5368e48668e393d3b894247b7c0bba80d
Stackoverflow Stackexchange Q: Is there a way to remove repository name from github page link? I created a page for a GitHub repository following these instructions: Getting Started with GitHub Pages. It worked perfectly; the page is already hosted. But I would like to change the page URL. This is the current URL: http://myusername.github.io/repositoryName/ Is there any way to remove the repository name? (http://myusername.github.io/) I've seen the articles on configuring a custom domain, but I think that's not my case. A: To remove the repository name, you'll need to make it a User Page (or an Organization page). Create a repository named myusername.github.io, and commit your content to the master branch. See this help page for more information.
Q: Is there a way to remove repository name from github page link? I created a page for a GitHub repository following these instructions: Getting Started with GitHub Pages. It worked perfectly; the page is already hosted. But I would like to change the page URL. This is the current URL: http://myusername.github.io/repositoryName/ Is there any way to remove the repository name? (http://myusername.github.io/) I've seen the articles on configuring a custom domain, but I think that's not my case. A: To remove the repository name, you'll need to make it a User Page (or an Organization page). Create a repository named myusername.github.io, and commit your content to the master branch. See this help page for more information.
stackoverflow
{ "language": "en", "length": 114, "provenance": "stackexchange_0000F.jsonl.gz:861365", "question_score": "12", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531269" }
1f9f4cf99b2e730c462daab256093ff4d3369a4b
Stackoverflow Stackexchange Q: apple-app-site-association never requested I'm using cordova-universal-links in a mobile app. They're working fine on the Android side, but nothing is happening on iOS. Here's a rundown of everything so far: * *It is being served over HTTPS *It is valid JSON *It is not behind any redirect *The app's certificate does have Associated Domains enabled *The app does have this domain in its .entitlements file *The app has not ever been released yet I'm not sure if TestFlight has anything to do with it, but from tailing the server logs, the apple-app-site-association file is never requested. Not when installing the app, not when navigating to the page from a link in Safari, not when navigating to the page from a link in Mail. I am at a complete loss here. Screenshots from the original post (images omitted): cURL output showing application/json and a non-redirect status; the .entitlements file in Xcode; Associated Domains in Xcode; Associated Domains enabled in the Developer Center; the AASA validator from Branch.io.
Q: apple-app-site-association never requested I'm using cordova-universal-links in a mobile app. They're working fine on the Android side, but nothing is happening on iOS. Here's a rundown of everything so far: * *It is being served over HTTPS *It is valid JSON *It is not behind any redirect *The app's certificate does have Associated Domains enabled *The app does have this domain in its .entitlements file *The app has not ever been released yet I'm not sure if TestFlight has anything to do with it, but from tailing the server logs, the apple-app-site-association file is never requested. Not when installing the app, not when navigating to the page from a link in Safari, not when navigating to the page from a link in Mail. I am at a complete loss here. Screenshots from the original post (images omitted): cURL output showing application/json and a non-redirect status; the .entitlements file in Xcode; Associated Domains in Xcode; Associated Domains enabled in the Developer Center; the AASA validator from Branch.io.
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:861376", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531295" }
d58ac114a5d6bd6de5b31f50a42337b637b1a981
Stackoverflow Stackexchange Q: bash: mysql: command not found I just installed MySQL Server and MySQL Workbench on my new desktop and am trying to use MySQL commands in Git Bash to create some databases locally in the folder C:\Users\OmNom\Desktop\code\burger\db (the database file is located in this folder) with mysql -u root -p, but I get this message in return: bash: mysql: command not found What am I doing wrong? Do I have to install something else to make it work? P.S. OS: Windows 10, Node.js installed, the server runs fine (I'm able to create databases in Workbench). Please help! P.P.S. If you need additional information, let me know! A: Copy the path to mysql.exe and add it to your computer's PATH variable, then open Git Bash and type winpty mysql -u root -p
Q: bash: mysql: command not found I just installed MySQL Server and MySQL Workbench on my new desktop and am trying to use MySQL commands in Git Bash to create some databases locally in the folder C:\Users\OmNom\Desktop\code\burger\db (the database file is located in this folder) with mysql -u root -p, but I get this message in return: bash: mysql: command not found What am I doing wrong? Do I have to install something else to make it work? P.S. OS: Windows 10, Node.js installed, the server runs fine (I'm able to create databases in Workbench). Please help! P.P.S. If you need additional information, let me know! A: Copy the path to mysql.exe and add it to your computer's PATH variable, then open Git Bash and type winpty mysql -u root -p A: If everything has been followed by the book and mysql still won't work (as happened for me), mysqlsh will do the job instead. A: The following steps installed Drush on my Windows 7 PC and laptop seamlessly. Please ignore the first two steps if you already have a web server stack running on your machine. * *Install VC11 *Install XAMPP-5.6-VC11 *Install Git *Install Composer *Install Drush using Composer; in Git Bash type: composer global require drush/drush *In bash, navigate to the sites folder *Check the environment by typing the following commands in bash one by one: php --version mysql --version composer --version drush --version In case any of the above commands returns an error, make sure to update the environment variables accordingly. Make sure that your environment variables have these entries (depending upon your install location and user name): C:\Users\Admin\AppData\Roaming\Composer\vendor\bin; C:\Users\Admin\AppData\Roaming\Composer\vendor\drush\drush\; C:\Program Files\Git\cmd;C:\ProgramData\ComposerSetup\bin; C:\xampp\mysql\bin; C:\xampp\php; *Finally, in sites\default\settings.php change the host from localhost to 127.0.0.1 Hope this helps A: This error means that for some reason your shell doesn't recognize the mysql client. It might be one of the following: * *You opened the shell before installing MySQL, so the PATH variable isn't updated on that shell instance. To make sure this is not the case, close the shell, re-open it, and try the command again. *For some reason the mysql client was not added to the PATH environment variable. Add the directory where mysql lives to the PATH variable using this command, and then try to run the client: set PATH=%PATH%;C:\xampp\php *Maybe you didn't install the MySQL client and only installed the server? Can you find the executable somewhere on your computer? A: That's because your MySQL bin program (the mysql command-line tool) was not added to your Windows PATH system variable. This can happen for several reasons; one in particular could be that your installation executable was not run as administrator. To use it with Git Bash you can do one of two things. Approach 1 Go to the bin directory of your MySQL installation using Git Bash. For example $ cd 'C:\Program Files\MySQL\MySQL Server 5.7\bin' Then run ./mysql -u root -p Approach 2 To make it easier to invoke MySQL programs, you can add the path name of the MySQL bin directory to your Windows system PATH environment variable. To do so, follow the official MySQL documentation guideline here: https://dev.mysql.com/doc/mysql-windows-excerpt/5.7/en/mysql-installation-windows-path.html After adding the PATH variable you need to restart Git Bash. Now you'll be able to run the mysql command from anywhere using Git Bash. A: First make sure that you have installed all the required components. If not, uninstall via the installer only and run the installer again. Then install all components if you haven't already. Now open Windows Terminal or Command Prompt and navigate into MySQL Server 8.0's bin directory (i.e., C:\Program Files\MySQL\MySQL Server 8.0\bin). Run $ mysql -u root -p and enter the root password at the Enter password: prompt. If it works, then the problem was only in the PATH environment variable. Add a new variable named MYSQL_HOME to the user variables with the path of the MySQL Server bin directory as its value, and add it to the Path variable as well. Just restart your machine and it should work fine.
stackoverflow
{ "language": "en", "length": 648, "provenance": "stackexchange_0000F.jsonl.gz:861397", "question_score": "15", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531375" }
7e0bf7fef637e7da6c9103a6f6e649d29db8109a
Stackoverflow Stackexchange Q: check your network connection or proxy settings I can't sign in to Visual Studio 2017; I got the message "we could not download a license. please check your network connection or proxy settings". When I tried signing in with my account I got the message "we could not refresh the credentials for the account operation returned an invalid status code 'forbidden'". I also tried downloading Visual Studio again, but it didn't help, and I also opened a new account, which didn't help either. A: Every effort was in vain, but the following method worked like a miracle: * *Sign up at signup.live.com *Choose Get a new email address *After creating an Outlook mail account, simply click the Sign in button in Visual Studio *Use these same generated credentials and it will work Still not sure why it worked.
Q: check your network connection or proxy settings I can't sign in to Visual Studio 2017; I got the message "we could not download a license. please check your network connection or proxy settings". When I tried signing in with my account I got the message "we could not refresh the credentials for the account operation returned an invalid status code 'forbidden'". I also tried downloading Visual Studio again, but it didn't help, and I also opened a new account, which didn't help either. A: Every effort was in vain, but the following method worked like a miracle: * *Sign up at signup.live.com *Choose Get a new email address *After creating an Outlook mail account, simply click the Sign in button in Visual Studio *Use these same generated credentials and it will work Still not sure why it worked. A: I had this problem after doing a fresh install of Windows 10 on my laptop. It turned out the problem was due to the VS2017 install not going smoothly. Re-running the installer and selecting to repair the installation fixed the issue. If you're interested in the specific details of why it failed, read on... The problem was actually caused by how Windows 10 installations work. After the initial install, it needed a restart to install some updates; this was all fine. After that reboot a few minutes later, I assumed I was all set and started installing the tools I needed. However, after a few minutes it informed me it needed to upgrade the entire version of Windows 10 and this would take about 90 minutes. If I'd allowed it to do that right then, I don't think I would have had the problem. However, I was at work, so I decided to take the hit over lunch and in the meantime started setting up the other tools I needed, including VS2017. After the VS2017 installation completed, it stated it needed to reboot to complete. That reboot, however, also triggered the Windows 10 update. When it finally completed, it left me with a broken VS2017 installation.
stackoverflow
{ "language": "en", "length": 341, "provenance": "stackexchange_0000F.jsonl.gz:861403", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531401" }
f74f82185a356b0d9c51e90f2338a9c69887e54d
Stackoverflow Stackexchange Q: Why does the source code of a website vary when visited from different browsers? Look at the source code of bartzmall.pk from different browsers and you will see different classes added to the html tag for each browser. From Firefox <html class="firefox firefox53 otherClasses"> From Chrome <html class="webkit chrome chrome58 otherClasses"> From IE <html class="ie ie11 otherClasses"> And from Opera <html class="webkit opera opera45 otherClasses"> The class "otherClasses" refers to about 14 other classes that are common to all browsers. How is this website able to change its source code when visited from different browsers? What purpose do these special classes that vary by browser serve? P.S. As a side question, what is the sense/wisdom/reason behind adding so many classes to the html tag? A: There's a JS plugin named "Modernizr" (google it) which detects your browser type and capabilities and inserts corresponding classes into your HTML tag, so you can set up CSS rules that respond to the particular differences between browsers using those classes. The Modernizr website itself seems to be broken at the moment, but here is an article that describes how it works: http://html5doctor.com/using-modernizr-to-detect-html5-features-and-provide-fallbacks/
Q: Why does the source code of a website vary when visited from different browsers? Look at the source code of bartzmall.pk from different browsers and you will see different classes added to the html tag for each browser. From Firefox <html class="firefox firefox53 otherClasses"> From Chrome <html class="webkit chrome chrome58 otherClasses"> From IE <html class="ie ie11 otherClasses"> And from Opera <html class="webkit opera opera45 otherClasses"> The class "otherClasses" refers to about 14 other classes that are common to all browsers. How is this website able to change its source code when visited from different browsers? What purpose do these special classes that vary by browser serve? P.S. As a side question, what is the sense/wisdom/reason behind adding so many classes to the html tag? A: There's a JS plugin named "Modernizr" (google it) which detects your browser type and capabilities and inserts corresponding classes into your HTML tag, so you can set up CSS rules that respond to the particular differences between browsers using those classes. The Modernizr website itself seems to be broken at the moment, but here is an article that describes how it works: http://html5doctor.com/using-modernizr-to-detect-html5-features-and-provide-fallbacks/
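As an illustration of why those classes exist, here is a hypothetical CSS snippet (the selectors and property values are invented) keyed to the classes seen on the site in question:

/* Applies only when the <html> tag carries the ie11 class */
.ie11 .product-grid { display: table; }

/* Blink/WebKit browsers (Chrome, Opera) get the modern layout */
.webkit .product-grid { display: flex; }

Because the classes sit on the root element, a single stylesheet can carry per-browser overrides without any JavaScript at render time.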
stackoverflow
{ "language": "en", "length": 189, "provenance": "stackexchange_0000F.jsonl.gz:861411", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531421" }
fad7fde047856553b8d1216e9a397f4d8ad4b87c
Stackoverflow Stackexchange Q: Why does ANTLR require all or none alternatives be labeled? I'm new to ANTLR. I just discovered that it is possible to label each alternative in a production like so: foo : a # aLabel | b # bLabel | // ... ; However, I find it unpleasant that either all alternatives or none must be labeled. I needed to label just 2 alternatives of a production with 20+ branches recently, and I ended up labelling each of the others # stubLabel. Is there any reason why it has to be all or none? A: As soon as you add a label ANTLR4 will no longer generate a context class for that rule but instead individual context classes for each alt. This cannot be mixed (e.g. having a context for the entire rule and at the same time contexts for only some of the alts). Once you start using labels and the rule context is no longer generated, you have to generate contexts for all alts or something would be missing.
Q: Why does ANTLR require all or none alternatives be labeled? I'm new to ANTLR. I just discovered that it is possible to label each alternative in a production like so: foo : a # aLabel | b # bLabel | // ... ; However, I find it unpleasant that either all alternatives or none must be labeled. I needed to label just 2 alternatives of a production with 20+ branches recently, and I ended up labelling each of the others # stubLabel. Is there any reason why it has to be all or none? A: As soon as you add a label ANTLR4 will no longer generate a context class for that rule but instead individual context classes for each alt. This cannot be mixed (e.g. having a context for the entire rule and at the same time contexts for only some of the alts). Once you start using labels and the rule context is no longer generated, you have to generate contexts for all alts or something would be missing. A: OK, I believe I've figured this out. Presumably to save space, the node corresponding to each label is subclassed from the node corresponding to the production, rather than being a child of it. expression : // ... | foo # namedMethodInvocation ; becomes a NamedMethodInvocationContext class extending ExpressionContext in the generated parser (the original answer illustrated this with a screenshot of the generated classes, omitted here).
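To see the practical effect, here is an illustrative Java listener sketch (the grammar name Demo and the listener class name are assumed; the rule and labels come from the example at the top): with labels, ANTLR emits one context class and one listener callback per alternative rather than a single one for the whole rule.

// Hypothetical listener for a grammar named Demo containing the labeled
// rule foo : a # aLabel | b # bLabel ; shown above.
public class MyFooListener extends DemoBaseListener {
    @Override
    public void enterALabel(DemoParser.ALabelContext ctx) {
        // fires only when the 'a' alternative matched
    }

    @Override
    public void enterBLabel(DemoParser.BLabelContext ctx) {
        // fires only when the 'b' alternative matched
    }
}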
stackoverflow
{ "language": "en", "length": 215, "provenance": "stackexchange_0000F.jsonl.gz:861461", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44531576" }