Stackoverflow Stackexchange
Q: Getting AngularUI Mask to work with Angular Material I have a project that uses the Angular Material framework, in which I'm trying to use Angular UI Mask for phone-number formatting on an input field. I'm able to get each one to work individually (the Material label behaves properly without the mask, and the mask works just fine without the Material label), but not together: I would like the placeholder in the input field to read "Phone Number", have that placeholder transition to a label above the input (as a Material input would), and then have the mask appear in the textbox. Whenever I do get them both working, the placeholder and the label show at the same time.
Code is simply:
<md-input-container>
  <label>Phone Number</label>
  <input aria-label="Phone Number" id="phone" name="phone" type="text" ng-value="phone" ng-model="phone" ui-mask="(999) 999-9999" ui-mask-placeholder placeholder="Phone Number">
</md-input-container>
Here's the Plunker: http://plnkr.co/edit/fXIRnKwdnBPwEODYuTBP
A: Add:
ui-options="{addDefaultPlaceholder: false}"
This hides the input mask until onBlur occurs.
It works with or without the label.
Here's a plunker: http://plnkr.co/edit/TQxPE2XLG5on1JSKHRHq?p=preview
Source: https://stackoverflow.com/questions/44465753

Q: Testing that Django templates render correctly I have an index.html that includes a nav.html, which has a url tag that points to a route name that doesn't exist. When I run the index view test, the response code is 200 and the test passes. If I manage.py runserver and navigate to the index in my browser, I get a NoReverseMatch error page. When I remove the include from index.html and put the contents of nav.html directly into index.html, the test fails as expected.
How do I write a test that will catch the problem in the included template?
nav.html
{% url 'project:wrong_name' %}
index.html
{% include 'project/nav.html' %}
views.py
def index(request):
    return render(request, 'project/index.html')
tests.py
def test_index_view(client):
    response = client.get(reverse('project:index'))
    assert response.status_code == 200
settings.py
...
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
...
virtualenv (abbreviated):
python==3.6.1
Django==1.11
pytest-django==3.1.2
Source: https://stackoverflow.com/questions/44465775

Q: getOrDefault for Map Trying to getOrDefault for Map like:
String test = map.getOrDefault("test", "")
But it gives me the error "Required ? but got a String". Is there any way to get around this?
A: The values of a Map<String, ?> could be of any type.
getOrDefault requires the second parameter to be of the same type as the values; there is no value other than null which can satisfy this, because you don't know if that ? is String, Integer or whatever.
Because you are only retrieving a value from the map, you can safely cast to a Map<String, Object>:
Object value = ((Map<String, Object>) map).getOrDefault("key", "");
This is because you are not putting any value into the map which would make calls unsafe later; and any value type can be stored safely in an Object reference.
A: The default implementation of this method returns the given default value (a generic, so it can be any type) only when the key is absent; if the key is present but mapped to null, that null is returned:
default V getOrDefault(Object key, V defaultValue) {
    V v;
    return (((v = get(key)) != null) || containsKey(key))
        ? v
        : defaultValue;
}
documentation link attached: https://docs.oracle.com/javase/8/docs/api/java/util/Map.html#getOrDefault-java.lang.Object-V-
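For readers who want to see that null-versus-absent rule in action, here is a small Python mirror of the Java implementation above (an illustration only; the function name and test data are mine, not part of java.util.Map):

```python
def get_or_default(mapping, key, default):
    # Mirror of java.util.Map.getOrDefault: a stored value wins,
    # even when that stored value is None; the default is used
    # only when the key is truly absent.
    v = mapping.get(key)
    if v is not None or key in mapping:
        return v
    return default

print(get_or_default({"a": 1}, "a", 0))      # 1: key present
print(get_or_default({}, "a", 0))            # 0: key absent, default wins
print(get_or_default({"a": None}, "a", 0))   # None: a stored null is returned
```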
Source: https://stackoverflow.com/questions/44465854

Q: Two data.tables number of matching columns If I have two data.tables, dt1 and dt2, I want the number of matches between columns using an if-then sort of logic: if dt1$V1 == dt2$V1, then does dt1$V2 == dt2$V2? It is key for this if-then statement to group by the matches in dt1$V1 == dt2$V1. I would like to use data.table for its efficiency, since I actually have a large dataset.
dt1 <- data.table(c("a","b","c","d","e"), c(1:5))
dt2 <- data.table(c("a","d","e","f","g"), c(3:7))
In this dummy example, there are 3 matches between the V1s, but only two within those groups for V2s. So the answer (using nrow perhaps, if I subset), would be 2.
A: I suppose you are looking for fintersect:
fintersect(dt1,dt2)
gives:
V1 V2
1: d 4
2: e 5
To get the number of rows, add [, .N]:
fintersect(dt1,dt2)[, .N]
which gives:
[1] 2
A: Well, this is not pretty, but it works:
sum(dt1[V1 %in% dt2$V1]$V2 == dt2[V1 %in% dt1[V1 %in% dt2$V1]$V1]$V2)
Just read your comment, if you want a data.table with the correct combinations you can make it even longer, like this:
dt1[V1 %in% dt2$V1][dt1[V1 %in% dt2$V1]$V2 == dt2[V1 %in% dt1[V1 %in% dt2$V1]$V1]$V2]
V1 V2
1: d 4
2: e 5
I'm definitely looking forward to see other answers :)
A: We can just do a join
dt1[dt2, on = names(dt1), nomatch = 0]
# V1 V2
#1: d 4
#2: e 5
or inner_join from dplyr
library(dplyr)
inner_join(dt1, dt2)
# V1 V2
#1 d 4
#2 e 5
Or with merge
merge(dt1, dt2)
# V1 V2
#1: d 4
#2: e 5
For all of the above, the number of matches can be found with nrow:
nrow(merge(dt1, dt2))
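For readers outside R, the row-intersection logic behind all of these answers can be sketched with plain set operations in Python (a hedged illustration using the same dummy data, not data.table itself):

```python
# Rows of dt1 and dt2 represented as (V1, V2) tuples.
dt1 = {("a", 1), ("b", 2), ("c", 3), ("d", 4), ("e", 5)}
dt2 = {("a", 3), ("d", 4), ("e", 5), ("f", 6), ("g", 7)}

# Rows that match on both columns, like fintersect / an inner join.
common = dt1 & dt2
print(sorted(common))  # [('d', 4), ('e', 5)]
print(len(common))     # 2
```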
Source: https://stackoverflow.com/questions/44465902

Q: How can I convert a trained Tensorflow model to Keras? I have a trained Tensorflow model and weights vector which have been exported to protobuf and weights files respectively.
How can I convert these to JSON or YAML and HDF5 files which can be used by Keras?
I have the code for the Tensorflow model, so it would also be acceptable to convert the tf.Session to a keras model and save that in code.
A: I think the callback in keras is also a solution.
The ckpt file can be saved by TF with:
saver = tf.train.Saver()
saver.save(sess, checkpoint_name)
and to load the checkpoint in Keras, you need a callback class as follows:
class RestoreCkptCallback(keras.callbacks.Callback):
    def __init__(self, pretrained_file):
        self.pretrained_file = pretrained_file
        self.sess = keras.backend.get_session()
        self.saver = tf.train.Saver()
    def on_train_begin(self, logs=None):
        if self.pretrained_file:
            self.saver.restore(self.sess, self.pretrained_file)
            print('load weights: OK.')
Then in your keras script:
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
restore_ckpt_callback = RestoreCkptCallback(pretrained_file='./XXXX.ckpt')
model.fit(x_train, y_train, batch_size=128, epochs=20, callbacks=[restore_ckpt_callback])
That will be fine.
I think it is easy to implement and hope it helps.
A: Francois Chollet, the creator of Keras, stated in 04/2017: "you cannot turn an arbitrary TensorFlow checkpoint into a Keras model. What you can do, however, is build an equivalent Keras model then load into this Keras model the weights"; see https://github.com/keras-team/keras/issues/5273. To my knowledge this hasn't changed.
A small example:
First, you can extract the weights of a tensorflow checkpoint like this
PATH_REL_META = r'checkpoint1.meta'

# start tensorflow session
with tf.Session() as sess:
    # import graph
    saver = tf.train.import_meta_graph(PATH_REL_META)
    # load weights for graph
    saver.restore(sess, PATH_REL_META[:-5])
    # get all global variables (including model variables)
    vars_global = tf.global_variables()
    # get their name and value and put them into dictionary
    sess.as_default()
    model_vars = {}
    for var in vars_global:
        try:
            model_vars[var.name] = var.eval()
        except:
            print("For var={}, an exception occurred".format(var.name))
It might also be of use to export the tensorflow model for use in tensorboard, see https://stackoverflow.com/a/43569991/2135504
Second, you build your Keras model as usual and finalize it with model.compile. Note that you need to define each layer by name and add it to the model afterwards, e.g.
layer_1 = keras.layers.Conv2D(6, (7,7), activation='relu', input_shape=(48,48,1))
net.add(layer_1)
...
net.compile(...)
Third, you can set the weights with the tensorflow values, e.g.
layer_1.set_weights([model_vars['conv7x7x1_1/kernel:0'], model_vars['conv7x7x1_1/bias:0']])
A: Currently, there is no direct in-built support in Tensorflow or Keras to convert the frozen model or the checkpoint file to hdf5 format.
But since you have mentioned that you have the code of the Tensorflow model, you will have to rewrite that model's code in Keras. Then, you will have to read the values of your variables from the checkpoint file and assign them to the Keras model using the layer.set_weights(weights) method.
More than this methodology, I would suggest you do the training directly in Keras, as it is claimed that Keras' optimizers are 5-10% faster than Tensorflow's. Another way is to write your code in Tensorflow with the tf.contrib.keras module and save the file directly in hdf5 format.
A: Unsure if this is what you are looking for, but I happened to just do the same with the newly released keras support in TF 1.2. You can find more on the API here: https://www.tensorflow.org/api_docs/python/tf/contrib/keras
To save you a little time, I also found that I had to include keras modules as shown below with the additional python.keras appended to what is shown in the API docs.
from tensorflow.contrib.keras.python.keras.models import Sequential
Hope that helps get you where you want to go. Essentially once integrated in, you then just handle your model/weight export as usual.
Source: https://stackoverflow.com/questions/44466066

Q: Zookeeper Node vs. zNode I just started reading about ZooKeeper, and I am getting confused about data replication and the data model.
The ZooKeeper ensemble will contain multiple nodes (machines), with one leader and the others as followers.
The data model is a tree structure with each node being a znode.
How do those two structures work together? Do znode and node refer to the same thing?
I am trying to understand it as: each node in the ZooKeeper ensemble will have the same data model that contains znodes. So a znode is actually data stored on the nodes. Is that right?
A: The two concepts don't really relate to each other in any way. A znode is part of the data model. Essentially, it's a path on a tree-like structure that represents a piece of data stored in ZooKeeper. Conversely, a node is a general systems term you could just replace with server.
Unfortunately, we don't have enough terms in computer science, so terminology can become confusing. But this is simply the difference between a tree node (in the data model) and a cluster node (in the ensemble).
Source: https://stackoverflow.com/questions/44466182

Q: Postgres upsert using results from select I'm trying to upsert with postgres using values from a select. It looks like:
INSERT INTO foo (a, b, c)
SELECT a_, b_, c_
-- hairy sql
ON CONFLICT (...condition...) DO UPDATE
SET "c"=???
On conflict, I want to use one of the values from my select statement, but I can't find the right syntax to alias it. How can I do this with Postgres?
A: Use the excluded keyword:
INSERT INTO foo (a, b, c)
SELECT a_, b_, c_
-- hairy sql
ON CONFLICT (...condition...) DO UPDATE
SET c = excluded.c;
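SQLite (3.24+) supports the same excluded mechanism, so the pattern can be demonstrated end to end with Python's built-in sqlite3 module (a sketch with a made-up table and columns, not the asker's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (a INTEGER PRIMARY KEY, c TEXT)")
conn.execute("INSERT INTO foo (a, c) VALUES (1, 'old')")

# On conflict, excluded.c is the value the failed INSERT tried to write.
conn.execute(
    "INSERT INTO foo (a, c) VALUES (1, 'new') "
    "ON CONFLICT (a) DO UPDATE SET c = excluded.c"
)
print(conn.execute("SELECT c FROM foo WHERE a = 1").fetchone()[0])  # new
```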
Source: https://stackoverflow.com/questions/44466232

Q: Laravel Validation: Custom rules on arrays (before_if) I am trying to write a custom before_if rule that mirrors the required_if rule. What I am trying to accomplish is making sure that the "primary" name on an account is not a minor (18+ years), but I am not quite sure how to deal with this because I am looking at an array.
My rule looks like this
'names.*.birthdate' => 'required|date|before_if:-18 years,names.*.type,0' // Type 0 is primary name
However, I am not quite sure what to do in my extend logic. Specifically, I am not sure how to find the index of the current name I am on. I know this is possible because I already do this multiple times using required_if.
Validator::extend('before_if', function($attribute, $value, $parameters, $validator){
    // Not sure how to find the current index
});
Source: https://stackoverflow.com/questions/44466263

Q: Is `require jquery_ujs` still needed in Rails 5.1? I am installing jQuery in my 5.1.x Rails app via the jquery-rails gem.
In the gem setup, they recommend to add these lines to application.js by default:
//= require jquery
//= require jquery_ujs
But in a Rails 5.1.x app, you already have this line, which doesn't depend on jQuery anymore:
//= require rails-ujs
I suppose both are doing the exact same thing and one is not needed.
Should I keep both anyway or should I prefer only jquery_ujs or only rails-ujs?
A: jquery-ujs is a thing of the past as of Rails 5.1, you don't need it.
A: As of Rails 5.1 jQuery is no longer required for UJS (unobtrusive javascript). So if you have no need of jQuery in your rails app, you can just use
//= require rails-ujs
On the other hand, if you do use jQuery in your app via the jquery-rails gem, you should NOT require rails-ujs, but should instead use:
//= require jquery
//= require jquery_ujs
Requiring jquery_ujs along with jQuery can cause issues in the app, and you may see the following JS console error:
Uncaught Error: jquery-ujs has already been loaded!
Source: https://stackoverflow.com/questions/44466430

Q: How to generate HTML5 valid Javadoc? I have generated HTML5 JavaDoc in the Eclipse Luna clicking Project -> Generate Javadoc...
The output is a well-formatted standard Javadoc HTML file; however, it is not valid HTML5. I find this unsuitable, since I would like to upload the whole documentation to a website.
I have tested the generated files with W3 Validator.
How can I force the generator to produce valid HTML5, including <!DOCTYPE html> at the beginning of the page and avoiding obsolete elements such as frameset?
A: As far as I can tell this will be available in jdk-9 via jep-224. In java 8 current type is html4.
A: On current (Photon) Eclipse select your project in Package Explorer view and go to :
Project > Generate Javadoc ...
This brings you to page 1 of the Javadoc set-up.
After selecting the Javadoc command path, the project, the visibility and the output folder, click the Next button twice to get to page 3 of the set-up where you may enter various options for the Javadoc.exe command.
Enter -html5 as your VM option.
It is best to enter -noqualifier all in the Javadoc options box in order to remove qualifier prefixes from each class. Otherwise the objects in the final Javadoc's Modifier & Type and Method columns would be given as java.lang.String rather than String and ArrayList as java.util.ArrayList. (It gets longer and sillier still for HashMap classes with proprietary classes) The effect of this is to slow down readability of the docs and reduce the available width of the (most important) Description column.
You can also add other options for things like custom tags here, e.g.
-tag custom.date:a:"Date: "
so that the tag
@custom.date: November 2018
shows as
Date: November 2018
in the final Javadoc.
Finally click Finish button to start generating javadocs in HTML5 format.
Source: https://stackoverflow.com/questions/44466517

Q: SKAction applyForce not working? I want to mimic gravity without using physicsBody.
However when I do this
let applyForce = SKAction.applyForce(CGVector(dx:0,dy:-9.8), duration:duration)
sprite.run(applyForce)
Nothing happens. Why is that so?
A: The function you are calling is run from your sprite, but targets the sprite's physics body.
You would need to create your own version of applyForce() that doesn't require a physics body.
Source: https://stackoverflow.com/questions/44466554

Q: default branch for pull request My company uses github. When I want to do a pull request I need to make my PR from my fork to the main repo into the staging branch. By default my PRs point to the master branch, so for every pull request I have to change which branch I'm merging into. I know that you can set the default branch in github. I want the default branch to remain master, but I want my pull requests to point to staging by default. Is that possible?
In the image below I don't want to have to change base: master to base: staging every time. The bigger pain is when I forget to change it to staging.
A: Go to the Settings page, then Branches, and select the default branch.
Source: https://stackoverflow.com/questions/44466618

Q: Gradle Project Version from build.gradle via Shell Bash Script I want to be able to grab the build.gradle version via a bash shell script that happens post build. How would I go about getting this?
For reference, in a Maven project I achieve this with the following command: mvn help:evaluate -Dexpression=project.version | grep -e '^[^\[]'. What is its equivalent for Gradle?
A: Either create a task in your build script that prints the version during the execution phase, then call that task with -q so the output contains only your version. Something like task printVersion { doLast { logger.quiet version } }.
If you don't want to modify your build script, you can instead add this task from an init script, either specified manually with -i when you need it or placed in ~/.gradle/init.d/ so it is always applied; then again call it with -q.
A: The properties task can also do this.
./gradlew properties | grep ^version:
Source: https://stackoverflow.com/questions/44466728

Q: Why "go test -v" doesn't see the GOPATH or GOPATH but "go env" does? The application works fine when I run it locally with:
$ dev_appserver.py app.yaml
However, when I attempt to run tests, the ENV doesn't seem to be set.
$ go test -v
skincare.go:6:5: cannot find package "appengine" in any of:
/usr/local/go/src/appengine (from $GOROOT)
/Users/bryan/go/src/appengine (from $GOPATH)
skincare.go:7:5: cannot find package "appengine/datastore" in any of:
/usr/local/go/src/appengine/datastore (from $GOROOT)
/Users/bryan/go/src/appengine/datastore (from $GOPATH)
skincare.go:8:5: cannot find package "appengine/user" in any of:
/usr/local/go/src/appengine/user (from $GOROOT)
/Users/bryan/go/src/appengine/user (from $GOPATH)
$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/bryan/go/"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GO15VENDOREXPERIMENT="1"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fno-common"
CXX="clang++"
CGO_ENABLED="1"
A: $GOPATH should not contain the src part of the path. So instead of pointing to /Users/bryan/go/src/ it should point to /Users/bryan/go.
|
Q: Why "go test -v" doesn't see the GOPATH or GOPATH but "go env" does? The application works fine when I run it locally with:
$ dev_appserver.py app.yaml
However, when I attempt to run tests, the ENV doesn't seem to be set.
$ go test -v
skincare.go:6:5: cannot find package "appengine" in any of:
/usr/local/go/src/appengine (from $GOROOT)
/Users/bryan/go/src/appengine (from $GOPATH)
skincare.go:7:5: cannot find package "appengine/datastore" in any of:
/usr/local/go/src/appengine/datastore (from $GOROOT)
/Users/bryan/go/src/appengine/datastore (from $GOPATH)
skincare.go:8:5: cannot find package "appengine/user" in any of:
/usr/local/go/src/appengine/user (from $GOROOT)
/Users/bryan/go/src/appengine/user (from $GOPATH)
$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/bryan/go/"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GO15VENDOREXPERIMENT="1"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fno-common"
CXX="clang++"
CGO_ENABLED="1"
A: $GOPATH should not contain the src part of the path. So instead of pointing to /Users/bryan/go/src/ it should point to /Users/bryan/go.
A: I suspect your tests are failing because you're invoking them directly with go test rather than using the App Engine SDK's test runner, goapp test, which will link you to the appengine package when it runs.
|
stackoverflow
|
{
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:840747",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44466785"
}
|
7399de0844ddebd44e16b7970e5c8ae288ae6928
|
Stackoverflow Stackexchange
Q: Split line based on regex in Julia I'm interested in splitting a line using a regular expression in Julia. My input is a corpus in Blei's LDA-C format consisting of docId wordID : wordCNT For example a document with five words is represented as follows:
186 0:1 12:1 15:2 3:1 4:1
I'm looking for a way to aggregate words and their counts into separate arrays, i.e. my desired output:
words = [0, 12, 15, 3, 4]
counts = [1, 1, 2, 1, 1]
I've tried using m = match(r"(\d+):(\d+)",line). However, it only finds the first pair 0:1. I'm looking for something similar to Python's re.compile(r'[ :]').split(line). How would I split a line based on regex in Julia?
A: There's no need to use regex here; Julia's split function allows using multiple characters to define where the splits should occur:
julia> v = split(line, [':',' '])
11-element Array{SubString{String},1}:
"186"
"0"
"1"
"12"
"1"
"15"
"2"
"3"
"1"
"4"
"1"
julia> words = v[2:2:end]
5-element Array{SubString{String},1}:
"0"
"12"
"15"
"3"
"4"
julia> counts = v[3:2:end]
5-element Array{SubString{String},1}:
"1"
"1"
"2"
"1"
"1"
|
Q: Split line based on regex in Julia I'm interested in splitting a line using a regular expression in Julia. My input is a corpus in Blei's LDA-C format consisting of docId wordID : wordCNT For example a document with five words is represented as follows:
186 0:1 12:1 15:2 3:1 4:1
I'm looking for a way to aggregate words and their counts into separate arrays, i.e. my desired output:
words = [0, 12, 15, 3, 4]
counts = [1, 1, 2, 1, 1]
I've tried using m = match(r"(\d+):(\d+)",line). However, it only finds the first pair 0:1. I'm looking for something similar to Python's re.compile(r'[ :]').split(line). How would I split a line based on regex in Julia?
A: There's no need to use regex here; Julia's split function allows using multiple characters to define where the splits should occur:
julia> v = split(line, [':',' '])
11-element Array{SubString{String},1}:
"186"
"0"
"1"
"12"
"1"
"15"
"2"
"3"
"1"
"4"
"1"
julia> words = v[2:2:end]
5-element Array{SubString{String},1}:
"0"
"12"
"15"
"3"
"4"
julia> counts = v[3:2:end]
5-element Array{SubString{String},1}:
"1"
"1"
"2"
"1"
"1"
A: I discovered the eachmatch method that returns an iterator over the regex matches. An alternative solution is to iterate over each match:
words, counts = Int64[], Int64[]
for m in eachmatch(r"(\d+):(\d+)", line)
wd, cnt = m.captures
push!(words, parse(Int64, wd))
push!(counts, parse(Int64, cnt))
end
A: As Matt B. mentions, there's no need for a Regex here as the Julia lib split() can use an array of chars.
However - when there is a need for Regex - the same split() function just works, similar to what others suggest here:
line = "186 0:1 12:1 15:2 3:1 4:1"
s = split(line, r":| ")
words = s[2:2:end]
counts = s[3:2:end]
I've recently had to do exactly that in some Unicode processing code (where the split chars - where a "combined character", thus not something that can fit in julia 'single-quotes') meaning:
split_chars = ["bunch","of","random","delims"]
line = "line_with_these_delims_in_the_middle"
r_split = Regex( join(split_chars, "|") )
split( line, r_split )
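For comparison, the Python idiom mentioned in the question produces the same token list, and the same even/odd slicing recovers words and counts; a sketch:

```python
import re

line = "186 0:1 12:1 15:2 3:1 4:1"
parts = re.split(r"[ :]", line)        # ['186', '0', '1', '12', '1', ...]
words = [int(w) for w in parts[1::2]]  # skip the docId, take every other token
counts = [int(c) for c in parts[2::2]]
print(words, counts)  # [0, 12, 15, 3, 4] [1, 1, 2, 1, 1]
```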
|
stackoverflow
|
{
"language": "en",
"length": 334,
"provenance": "stackexchange_0000F.jsonl.gz:840751",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44466792"
}
|
fe87ab07c1a970a00ca562977afc0812da54db24
|
Stackoverflow Stackexchange
Q: Libgdx textfield: how to set font size? Here
TextField.TextFieldStyle textFieldStyle = skin.get(TextField.TextFieldStyle.class);
textFieldStyle.font.scale(1.6f);
I can't find font.scale();
my code
username = new TextField("", skin);
username.setMessageText("");
A: You need to get the data from the font first, then you can set the scale. But the recommended way is to create different sizes of the same font.
font.getData().setScale(1.0f);
Here is a link to the same question: Changing font size in skin
|
Q: Libgdx textfield: how to set font size? Here
TextField.TextFieldStyle textFieldStyle = skin.get(TextField.TextFieldStyle.class);
textFieldStyle.font.scale(1.6f);
I can't find font.scale();
my code
username = new TextField("", skin);
username.setMessageText("");
A: You need to get the data from the font first, then you can set the scale. But the recommended way is to create different sizes of the same font.
font.getData().setScale(1.0f);
Here is a link to the same question: Changing font size in skin
|
stackoverflow
|
{
"language": "en",
"length": 69,
"provenance": "stackexchange_0000F.jsonl.gz:840772",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44466842"
}
|
c140ded3817c4629bc2e1e61627923b53a648c26
|
Stackoverflow Stackexchange
Q: ASP.NET Core DbContext injection I have a ConfigurationDbContext that I am trying to use. It has multiple parameters, DbContextOptions and ConfigurationStoreOptions.
How can I add this DbContext to my services in ASP.NET Core?
I have attempted the following in my Startup.cs:
ConfigureServices
....
services.AddDbContext<ConfigurationDbContext>(BuildDbContext(connString));
....
private ConfigurationDbContext BuildDbContext(string connString)
{
var builder = new DbContextOptionsBuilder<ConfigurationDbContext>();
builder.UseSqlServer(connString);
var options = builder.Options;
return new ConfigurationDbContext(options, new ConfigurationStoreOptions());
}
A: You can use this in startup.cs.
Detail information : https://learn.microsoft.com/en-us/ef/core/miscellaneous/configuring-dbcontext
Detail Example : Getting started with ASP.NET Core MVC and Entity Framework Core
public void ConfigureServices(IServiceCollection services)
{
// Add framework services.
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
}
|
Q: ASP.NET Core DbContext injection I have a ConfigurationDbContext that I am trying to use. It has multiple parameters, DbContextOptions and ConfigurationStoreOptions.
How can I add this DbContext to my services in ASP.NET Core?
I have attempted the following in my Startup.cs:
ConfigureServices
....
services.AddDbContext<ConfigurationDbContext>(BuildDbContext(connString));
....
private ConfigurationDbContext BuildDbContext(string connString)
{
var builder = new DbContextOptionsBuilder<ConfigurationDbContext>();
builder.UseSqlServer(connString);
var options = builder.Options;
return new ConfigurationDbContext(options, new ConfigurationStoreOptions());
}
A: You can use this in startup.cs.
Detail information : https://learn.microsoft.com/en-us/ef/core/miscellaneous/configuring-dbcontext
Detail Example : Getting started with ASP.NET Core MVC and Entity Framework Core
public void ConfigureServices(IServiceCollection services)
{
// Add framework services.
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
}
A: In order to register DbContext as a service in IServiceCollection you have two options:(we assume that you are going to connect to a SQL Server database)
Using AddDbContext<>
services.AddDbContext<YourDbContext>(o=>o.UseSqlServer(Your Connection String));
Using AddDbContextPool<>
services.AddDbContextPool<YourDbContext>(o=>o.UseSqlServer(Your Connection String));
As you can see, these two look similar in how they are written, but in fact they have some fundamental conceptual differences. @GabrielLuci has a nice response about the differences between these two: https://stackoverflow.com/a/48444206/1666800
Also note that you can store your connection string inside the appsettings.json file and simply read it using: Configuration.GetConnectionString("DefaultConnection") inside the ConfigureServices method in Startup.cs file.
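The connection string read by Configuration.GetConnectionString("DefaultConnection") lives in appsettings.json; a minimal sketch (the server and database names are placeholders):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=localhost;Database=MyDb;Trusted_Connection=True;"
  }
}
```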
A: AddDbContext implementation just registers the context itself and its common dependencies in DI.
Instead of AddDbContext call, it's perfectly legal to manually register your DbContext:
services.AddTransient<FooContext>();
Moreover, you could use a factory method to pass parameters (this is answering the question):
services.AddTransient<FooContext>(provider =>
{
//resolve another classes from DI
var anyOtherClass = provider.GetService<AnyOtherClass>();
//pass any parameters
return new FooContext(foo, bar);
});
P.S., In general, you don't have to register DbContextOptionsFactory and default DbContextOptions to resolve DbContext itself, but it could be necessary in specific cases.
A: Try this to inject your EF context (the context inherits from IDbContext):
1-Add your context to service:
public void ConfigureServices(IServiceCollection services)
{
services.AddDbContext<NopaDbContext>(
options => options
.UseLazyLoadingProxies()
.UseSqlServer(Configuration.GetConnectionString("NopaDbContext")),ServiceLifetime.Scoped);}
2-Inject your context:
private readonly IDbContext _context;
public EfRepository(NopaDbContext context)
{
this._context = context;
}
protected virtual DbSet<TEntity> Entities
{
get
{
if (_entities == null)
_entities = _context.Set<TEntity>();
return _entities;
}
}
A: You can put all your parameters of db context in a class AppDbContextParams and register a factory to create that object for appdbcontext:
services.AddScoped(sp =>
{
var currentUser = sp.GetService<IHttpContextAccessor>()?.HttpContext?.User?.Identity?.Name;
return new AppDbContextParams { GetCurrentUsernameCallback = () => currentUser ?? "n/a" };
});
A: EF Core 6 / .NET 6 has some changes to make it easier (and supported) to register DbContext and DbContextPool at the same time for different usages.
https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-6.0/whatsnew#dbcontext-factory-improvements
|
stackoverflow
|
{
"language": "en",
"length": 425,
"provenance": "stackexchange_0000F.jsonl.gz:840787",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44466885"
}
|
80b6a97e117ce06bcf4b3fbfe5c53a6fcf0a3df6
|
Stackoverflow Stackexchange
Q: How do I make some text tappable (respond to taps) in Flutter? Assume I have some Flutter code like this:
// ...
body: new Center(
child: new Text(
'Hello, I am some text',
),
),
// ...
How can I make the Text on the screen respond to a tap? (For example, simply printing to the log when I tap the text.)
Thanks!
A: As seen in this answer, you can use an InkWell or a gesture detector.
For example
InkWell(
child: Text("Hello"),
onTap: () {print("value of your text");},
)
Or
var textValue = "Flutter"
InkWell(
child: Text(textValue),
onTap: () {print(textValue);},
)
EDIT : As Collin Jackson suggested, you can also use a FlatButton
FlatButton(
onPressed: () {print("Hello world");},
child: Text("Hello world"),
);
If you don't need or require material (FlatButton, InkWell, etc), you can use GestureDetector:
GestureDetector(
onTap: () { print("I was tapped!"); },
child: Text("Hello world"),
)
|
Q: How do I make some text tappable (respond to taps) in Flutter? Assume I have some Flutter code like this:
// ...
body: new Center(
child: new Text(
'Hello, I am some text',
),
),
// ...
How can I make the Text on the screen respond to a tap? (For example, simply printing to the log when I tap the text.)
Thanks!
A: As seen in this answer, you can use an InkWell or a gesture detector.
For example
InkWell(
child: Text("Hello"),
onTap: () {print("value of your text");},
)
Or
var textValue = "Flutter"
InkWell(
child: Text(textValue),
onTap: () {print(textValue);},
)
EDIT : As Collin Jackson suggested, you can also use a FlatButton
FlatButton(
onPressed: () {print("Hello world");},
child: Text("Hello world"),
);
If you don't need or require material (FlatButton, InkWell, etc), you can use GestureDetector:
GestureDetector(
onTap: () { print("I was tapped!"); },
child: Text("Hello world"),
)
A: You can also make the text appear in the form of a URL like so:
new FlatButton(
onPressed: () {print("You've tapped me!")},
child: Text(
"Tap me!",
style: TextStyle(color: Colors.blue, decoration: TextDecoration.underline)),
);
A: One more approach
Create a function that returns a tappable widget:
Widget tapableText(String text, Function onTap) {
return GestureDetector(
onTap: onTap,
child: Text(text),
);
}
Usage example:
...
child: tapableText('Hello', () { print('I have been tapped :)'); }),
...
A: You can use RichText for tappable text:
Divide the text into TextSpans accordingly and attach a TapGestureRecognizer to each span you want to make tappable.
TapGestureRecognizer _termsConditionRecognizer;
TapGestureRecognizer _privacyPolicyRecognizer;
@override
void dispose() {
_privacyPolicyRecognizer.dispose();
_termsConditionRecognizer.dispose();
super.dispose();
}
@override
void initState() {
super.initState();
_termsConditionRecognizer = TapGestureRecognizer()
..onTap = () {
print("Terms and condition tapped");
};
_privacyPolicyRecognizer = TapGestureRecognizer()
..onTap = () {
print("Provacy Policy tapped");
};
}
@override
Widget build(BuildContext context) {
return Container(
child: Center(
child: RichText(
text: TextSpan(
text: 'By signing up you agree to the ',
children: [
TextSpan(
text: 'Terms And Condition',
recognizer: _termsConditionRecognizer,
),
TextSpan(
text: ' and ',
),
TextSpan(
text: 'Privacy Policy',
recognizer: _privacyPolicyRecognizer,
),
],
),
),
),
);
}
A: You can use these approaches:
*
*TextButton and provide your text to it.
*FlatButton and provide your text; to remove extra padding you can use the materialTapTargetSize property of FlatButton and provide MaterialTapTargetSize.shrinkWrap, but FlatButton is deprecated in newer versions.
*InkWell as discussed above.
|
stackoverflow
|
{
"language": "en",
"length": 386,
"provenance": "stackexchange_0000F.jsonl.gz:840795",
"question_score": "27",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44466908"
}
|
816fd4bbfff24d597620b5c8354ebdc71839f64f
|
Stackoverflow Stackexchange
Q: Change Windows command prompt to show only current folder Instead of showing
C:\Users\test_user\Documents\Folder\etc
show
\etc
or if possible limit it to a certain number
\Document\Folder\etc
A: If you check the help with prompt /?, there are two options that show either the current drive or the full path.
I would suggest using the new-line option along with the path, so that you get more space to view and type commands, using the combination below.
prompt $P$_$G
With this you will be able to see the Path in the line above the prompt.
|
Q: Change Windows command prompt to show only current folder Instead of showing
C:\Users\test_user\Documents\Folder\etc
show
\etc
or if possible limit it to a certain number
\Document\Folder\etc
A: If you check the help with prompt /?, there are two options that show either the current drive or the full path.
I would suggest using the new-line option along with the path, so that you get more space to view and type commands, using the combination below.
prompt $P$_$G
With this you will be able to see the Path in the line above the prompt.
A: In short, I can't see a simple way of doing it.
In order to change the prompt options you can use the prompt command. The configuration you're looking for isn't listed.
The available options can be viewed by
prompt /? in the command window.
https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/prompt.mspx?mfr=true
A: The following is a simple batch script which can set the prompt to include only the current folder. Note that it does not work on directory names with certain characters such as parenthesis and spaces. I named it cdd.bat.
@echo off
cd %1
for %%i in (%CD%) do set NEWDIR=%%~ni
PROMPT %NEWDIR%$G
A: As others pointed out, you can use the prompt command to set the text that is shown in cmd.
While you cannot dynamically set the path to just the parent folder, you can manually set it using:
prompt {text}
So in your case, you can set it as:
prompt etc\$G
This will result in:
etc\>
$G adds an arrow sign. You can refer the documentation for detailed explanation.
A: Here is a .ps1 file I use to do this for myself.
<#
FileName: promptPsShort.ps1
To set the prompt to the last folder name in the path:
> function prompt {$l=Get-Location; $p="$l".split("\")[-1]; "PS $p> "}
# works at cmd prompt, BUT NOT DIREECTLY from a .ps1 file.
RESEARCH
1. google: powershell 7 copy text into clipboard
[How to copy text from PowerShell](https://superuser.com/q/302032/236556)
[Set-Clipboard](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.management/?view=powershell-7)
2. google: powershell escape double quote
[Escaping in PowerShell](http://www.rlmueller.net/PowerShellEscape.htm)
3. google: powershell raw string
[About Quoting Rules](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_quoting_rules?view=powershell-7)
4. Usage example: powershell
PS C:\flutter_beta\flutter\examples\catalog\android\app\src\main> pwd
Path
----
C:\flutter_beta\flutter\examples\catalog\android\app\src\main
PS C:\flutter_beta\flutter\examples\catalog\android\app\src\main> promptPsShort.ps1
Paste the current Clipboard contents into the Powershell Command Line and press Enter.
PS C:\flutter_beta\flutter\examples\catalog\android\app\src\main> function prompt {$l=Get-Location; $p="$l".split("\")[-1]; "PS $p> "}
PS main>
PS main>
PS main>
#>
$shortPromptCmdStr = @'
function prompt {$l=Get-Location; $p="$l".split("\")[-1]; "PS $p> "}
'@
Set-Clipboard -Value $shortPromptCmdStr
write-host "Paste the current Clipboard contents into the Powershell Command Line and press Enter."
Love and peace,
Joe
|
stackoverflow
|
{
"language": "en",
"length": 415,
"provenance": "stackexchange_0000F.jsonl.gz:840817",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44466980"
}
|
77be5593af30deddbb0c55abb4b0bdf05f7f1924
|
Stackoverflow Stackexchange
Q: Python: how to save training datasets I have got training datasets, which are xtrain, ytrain, xtest and ytest. They are all numpy arrays. I want to save them together into a file, so that I can load them into workspace as done in keras for mnist.load_data:
(xtrain, ytrain), (xtest, ytest) = mnist.load_data(filepath)
In python, is there any way to save my training datasets into such a single file? Or are there any other appropriate methods to save them?
A: You have a number of options:
*
*npz
*hdf5
*pickle
Keras provides an option to save models to hdf5. Also, note that out of the three, it's the only interoperable format.
|
Q: Python: how to save training datasets I have got training datasets, which are xtrain, ytrain, xtest and ytest. They are all numpy arrays. I want to save them together into a file, so that I can load them into workspace as done in keras for mnist.load_data:
(xtrain, ytrain), (xtest, ytest) = mnist.load_data(filepath)
In python, is there any way to save my training datasets into such a single file? Or are there any other appropriate methods to save them?
A: You have a number of options:
*
*npz
*hdf5
*pickle
Keras provides an option to save models to hdf5. Also, note that out of the three, it's the only interoperable format.
A: Pickle is a good way to go:
import pickle as pkl
# to save it (pickle files must be opened in binary mode)
with open("train.pkl", "wb") as f:
    pkl.dump([train_x, train_y], f)
# to load it
with open("train.pkl", "rb") as f:
    train_x, train_y = pkl.load(f)
If your dataset is huge, I would recommend check out hdf5 as @Lukasz Tracewski mentioned.
A: I find hickle is a very nice way to save them all together into a dict:
import hickle as hkl
data = {'xtrain': xtrain, 'xtest': xtest,'ytrain': ytrain,'ytest':ytest}
hkl.dump(data,'data.hkl')
A: You could simply use numpy.save
np.save('xtrain.npy', xtrain)
or in a human readable format
np.savetxt('xtrain.txt', xtrain)
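To keep all four arrays in a single file and load them back in one call, as with mnist.load_data, numpy.savez is an option; a minimal sketch assuming NumPy is installed (array contents are illustrative):

```python
import numpy as np

xtrain, ytrain = np.arange(6).reshape(3, 2), np.array([0, 1, 0])
xtest, ytest = np.arange(4).reshape(2, 2), np.array([1, 0])

# Bundle all four arrays into one .npz archive, keyed by name.
np.savez("dataset.npz", xtrain=xtrain, ytrain=ytrain, xtest=xtest, ytest=ytest)

# Load them back with a single call, mnist.load_data style.
data = np.load("dataset.npz")
(xtrain2, ytrain2), (xtest2, ytest2) = (data["xtrain"], data["ytrain"]), (data["xtest"], data["ytest"])
print(np.array_equal(xtrain, xtrain2))  # True
```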
|
stackoverflow
|
{
"language": "en",
"length": 205,
"provenance": "stackexchange_0000F.jsonl.gz:840819",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44466993"
}
|
ca8bcc139a1612cbd9af8d80b4d4215942959b01
|
Stackoverflow Stackexchange
Q: How do I find Bluetooth connections in Mac system logs I've lost my BT earphones, and am trying to determine the last time they were connected to my macbook, to narrow the time window and help the search.
How can I learn this from system logs?
I opened Console.app, and found numerous mentions of connections to BT devices, of the format:
com.apple.message.domain: com.apple.bluetooth.connect
com.apple.message.host: 05AC8290
com.apple.message.process: blued
com.apple.message.device: Unknown Name
com.apple.message.uuid: 0x001F
com.apple.message.direction: Outgoing
com.apple.message.rssi: 127
com.apple.message.pairing: LE
com.apple.message.rate: LE
com.apple.message.sco: LE
SenderMachUUID: 557AF7B3-7829-380F-83D7-684B2004E540
How do I determine which ones are connections to my BT earphones (not my smartphone)? I know the MAC addresses of both the devices that connect to this computer, but they don't seem to be mentioned in the logs.
A: Hold shift and option buttons at the same time and click on the Bluetooth icon:
Click on the Debug and then click on Enable Bluetooth logging. Then in the magnifier type "Console". In the search area, search for Bluetooth. You should be able to see the logs:
|
Q: How do I find Bluetooth connections in Mac system logs I've lost my BT earphones, and am trying to determine the last time they were connected to my macbook, to narrow the time window and help the search.
How can I learn this from system logs?
I opened Console.app, and found numerous mentions of connections to BT devices, of the format:
com.apple.message.domain: com.apple.bluetooth.connect
com.apple.message.host: 05AC8290
com.apple.message.process: blued
com.apple.message.device: Unknown Name
com.apple.message.uuid: 0x001F
com.apple.message.direction: Outgoing
com.apple.message.rssi: 127
com.apple.message.pairing: LE
com.apple.message.rate: LE
com.apple.message.sco: LE
SenderMachUUID: 557AF7B3-7829-380F-83D7-684B2004E540
How do I determine which ones are connections to my BT earphones (not my smartphone)? I know the MAC addresses of both the devices that connect to this computer, but they don't seem to be mentioned in the logs.
A: Hold shift and option buttons at the same time and click on the Bluetooth icon:
Click on the Debug and then click on Enable Bluetooth logging. Then in the magnifier type "Console". In the search area, search for Bluetooth. You should be able to see the logs:
A: In order to check Bluetooth logs open the Console.app e.g. by pressing cmd + space, then typing "Console" and hitting Enter.
Inside the Console.app you can filter messages just to Bluetooth related by typing "bluetooth" inside Search field.
Tested on macOS Catalina.
|
stackoverflow
|
{
"language": "en",
"length": 215,
"provenance": "stackexchange_0000F.jsonl.gz:840833",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467031"
}
|
bebefb0950a67047fe4f6d5a5d90359da62d5945
|
Stackoverflow Stackexchange
Q: Force software keyboard on iOS 10 Is there any way anyone knows of to force the onscreen software keyboard in iOS when a Bluetooth HID device (like a barcode scanner) is active?
There are a few ancient questions on SO, but most work by manually adjusting the frame of the keyboard view, and that method no longer appears to work as of iOS 8.
Strangely there doesn't seem to be any information on how to do this post iOS 8. Is it just impossible?
See:
Show iPhone soft keyboard even though a hardware keyboard is connected
Force on screen keyboard to show when bluetooth keyboard connected
Default keyboard is not coming when Barcode Scanner Device is Connected by Bluetooth in IOS
Bluetooth HID Device & iOS textFields
Show virtual Keyboard when bluetooth keyboard connected?
|
Q: Force software keyboard on iOS 10 Is there any way anyone knows of to force the onscreen software keyboard in iOS when a Bluetooth HID device (like a barcode scanner) is active?
There are a few ancient questions on SO, but most work by manually adjusting the frame of the keyboard view, and that method no longer appears to work as of iOS 8.
Strangely there doesn't seem to be any information on how to do this post iOS 8. Is it just impossible?
See:
Show iPhone soft keyboard even though a hardware keyboard is connected
Force on screen keyboard to show when bluetooth keyboard connected
Default keyboard is not coming when Barcode Scanner Device is Connected by Bluetooth in IOS
Bluetooth HID Device & iOS textFields
Show virtual Keyboard when bluetooth keyboard connected?
|
stackoverflow
|
{
"language": "en",
"length": 135,
"provenance": "stackexchange_0000F.jsonl.gz:840850",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467082"
}
|
d6bf9acd700e9aaedf9e64afac95d605ef848033
|
Stackoverflow Stackexchange
Q: PostgreSQL update trigger Comparing Hstore values I am creating a trigger in PostgreSQL. On update I would like to compare all of the values in an Hstore column and update changes in my mirror table. I managed to get the names of my columns in the variable k but I am not able to get values using it from NEW and OLD.
CREATE OR REPLACE FUNCTION function_replication() RETURNS TRIGGER AS
$BODY$
DECLARE
k text;
BEGIN
FOR k IN SELECT key FROM EACH(hstore(NEW)) LOOP
IF NEW.k != OLD.k THEN
EXECUTE 'UPDATE ' || TG_TABLE_NAME || '_2' || 'SET ' || k || '=' || new.k || ' WHERE ID=$1.ID;' USING OLD;
END IF;
END LOOP;
RETURN NEW;
END;
$BODY$
language plpgsql;
A: You should operate on hstore representations of the records new and old. Also, use the format() function for better control and readability.
create or replace function function_replication()
returns trigger as
$body$
declare
newh hstore = hstore(new);
oldh hstore = hstore(old);
key text;
begin
foreach key in array akeys(newh) loop
if newh->key != oldh->key then
execute format(
'update %s_2 set %s = %L where id = %s',
tg_table_name, key, newh->key, oldh->'id');
end if;
end loop;
return new;
end;
$body$
language plpgsql;
|
Q: PostgreSQL update trigger Comparing Hstore values I am creating a trigger in PostgreSQL. On update I would like to compare all of the values in an Hstore column and update changes in my mirror table. I managed to get the names of my columns in the variable k but I am not able to get values using it from NEW and OLD.
CREATE OR REPLACE FUNCTION function_replication() RETURNS TRIGGER AS
$BODY$
DECLARE
k text;
BEGIN
FOR k IN SELECT key FROM EACH(hstore(NEW)) LOOP
IF NEW.k != OLD.k THEN
EXECUTE 'UPDATE ' || TG_TABLE_NAME || '_2' || 'SET ' || k || '=' || new.k || ' WHERE ID=$1.ID;' USING OLD;
END IF;
END LOOP;
RETURN NEW;
END;
$BODY$
language plpgsql;
A: You should operate on hstore representations of the records new and old. Also, use the format() function for better control and readability.
create or replace function function_replication()
returns trigger as
$body$
declare
newh hstore = hstore(new);
oldh hstore = hstore(old);
key text;
begin
foreach key in array akeys(newh) loop
if newh->key != oldh->key then
execute format(
'update %s_2 set %s = %L where id = %s',
tg_table_name, key, newh->key, oldh->'id');
end if;
end loop;
return new;
end;
$body$
language plpgsql;
A: Another version, with a minimal number of updates, in a partially functional design (where possible).
This trigger should be an AFTER trigger to ensure correct behavior.
CREATE OR REPLACE FUNCTION function_replication()
RETURNS trigger AS $$
DECLARE
newh hstore;
oldh hstore;
update_vec text[];
pair text[];
BEGIN
IF new IS DISTINCT FROM old THEN
IF new.id <> old.id THEN
RAISE EXCEPTION 'id should be immutable';
END IF;
newh := hstore(new); oldh := hstore(old); update_vec := '{}';
FOREACH pair SLICE 1 IN ARRAY hstore_to_matrix(newh - oldh)
LOOP
update_vec := update_vec || format('%I = %L', pair[1], pair[2]);
END LOOP;
EXECUTE
format('UPDATE %I SET %s WHERE id = $1',
tg_table_name || '_2',
array_to_string(update_vec, ', '))
USING old.id;
END IF;
RETURN NEW; -- the value is not important in AFTER trg
END;
$$ LANGUAGE plpgsql;
CREATE TABLE foo(id int PRIMARY KEY, a int, b int);
CREATE TABLE foo_2(LIKE foo INCLUDING ALL);
CREATE TRIGGER xxx AFTER UPDATE ON foo
FOR EACH ROW EXECUTE PROCEDURE function_replication();
INSERT INTO foo VALUES(1, NULL, NULL);
INSERT INTO foo VALUES(2, 1,1);
INSERT INTO foo_2 VALUES(1, NULL, NULL);
INSERT INTO foo_2 VALUES(2, 1,1);
UPDATE foo SET a = 20, b = 30 WHERE id = 1;
UPDATE foo SET a = NULL WHERE id = 1;
This code is a little bit more complex, but everything that should be escaped is escaped, and it reduces the number of executed UPDATE commands. UPDATE is a full SQL command, and the overhead of full SQL commands is significantly higher than that of the code that reduces their number.
|
stackoverflow
|
{
"language": "en",
"length": 452,
"provenance": "stackexchange_0000F.jsonl.gz:840906",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467257"
}
|
d74eb3527f6b0a05219ebcd680312a8e2fe86eaa
|
Stackoverflow Stackexchange
Q: TIMESTAMP format issue in HIVE I have Hive table created from JSON file.
CREATE external TABLE logan_test.t1 (
name string,
start_time timestamp
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
WITH SERDEPROPERTIES (
"timestamp.formats" = "yyyy-MM-dd'T'HH:mm:ss.SSSSSS"
)
LOCATION 's3://t1/';
My timestamp data is in the format of yyyy-MM-dd'T'HH:mm:ss.SSSSSS.
I specified SERDEPROPERTIES for timestamp format as given in the page.
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-TimestampstimestampTimestamps
The CREATE statement executed successfully, but SELECT * failed with the following error.
HIVE_BAD_DATA: Error parsing field value '2017-06-01T17:51:15.180400'
for field 1: Timestamp format must be yyyy-mm-dd hh:mm:ss[.fffffffff]
A: Jira HIVE-9298, in which timestamp.formats was introduced, says in its description that it is for LazySimpleSerDe. I did not find any mention in the documentation that it was implemented for other SerDes.
The solution is to define the timestamp as STRING and transform it in the SELECT.
Example for yyyy-MM-dd'T'HH:mm:ss.SSSSSS format:
select timestamp(regexp_replace(start_time, '^(.+?)T(.+?)','$1 $2'))
And this will work both for yyyy-MM-dd'T'HH:mm:ss.SSSSSS and yyyy-MM-dd HH:mm:ss.SSSSSS (normal timestamp) if there are both formats in data files.
timestamp(regexp_replace(start_time, '^(.+?)[T ](.+?)','$1 $2'))
Regex is powerful and you can parse different string formats using the same pattern.
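The same pattern idea can be tried out quickly outside Hive; here it is in JavaScript (illustration only, Hive's regexp_replace is what runs in the query):

```javascript
// Mirrors Hive's regexp_replace(start_time, '^(.+?)[T ](.+?)', '$1 $2'):
// normalize both ISO-style 'T' separators and plain spaces to one form.
const normalize = (ts) => ts.replace(/^(.+?)[T ](.+?)$/, '$1 $2');

console.log(normalize('2017-06-01T17:51:15.180400')); // 2017-06-01 17:51:15.180400
console.log(normalize('2017-06-01 17:51:15.180400')); // already normal, unchanged
```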
|
|
stackoverflow
|
{
"language": "en",
"length": 177,
"provenance": "stackexchange_0000F.jsonl.gz:840908",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467264"
}
|
52b0fbc8d3ecca69bb506063aea8459e901c408f
|
Stackoverflow Stackexchange
Q: How can I download the chat history of a group in Telegram? I would like to download the chat history (all messages) that were posted in a public group on Telegram. How can I do this with python?
I've found this method in the API https://core.telegram.org/method/messages.getHistory which I think looks like what I'm trying to do. But how do I actually call it? It seems there's no python examples for the MTproto protocol they use.
I also looked at the Bot API, but it doesn't seem to have a method to download messages.
A: As of an update in August 2018, the Telegram Desktop application supports saving chat history very conveniently.
You can store it as JSON- or HTML-formatted data.
To use this feature, make sure you have the latest version of Telegram Desktop installed on your computer, then click Settings > Export Telegram data.
https://telegram.org/blog/export-and-more
|
Q: How can I download the chat history of a group in Telegram? I would like to download the chat history (all messages) that were posted in a public group on Telegram. How can I do this with python?
I've found this method in the API https://core.telegram.org/method/messages.getHistory which I think looks like what I'm trying to do. But how do I actually call it? It seems there's no python examples for the MTproto protocol they use.
I also looked at the Bot API, but it doesn't seem to have a method to download messages.
A: As of an update in August 2018, the Telegram Desktop application supports saving chat history very conveniently.
You can store it as JSON- or HTML-formatted data.
To use this feature, make sure you have the latest version of Telegram Desktop installed on your computer, then click Settings > Export Telegram data.
https://telegram.org/blog/export-and-more
A: The currently accepted answer is for very old versions of Telethon. With Telethon 1.0, the code can and should be simplified to the following:
# chat can be:
# * int id (-12345)
# * str username (@chat)
# * str phone number (+12 3456)
# * Peer (types.PeerChat(12345))
# * InputPeer (types.InputPeerChat(12345))
# * Chat object (types.Chat)
# * ...and many more types
chat = ...
api_id = ...
api_hash = ...
from telethon.sync import TelegramClient
client = TelegramClient('session_id', api_id, api_hash)
with client:
# 10 is the limit on how many messages to fetch. Remove or change for more.
for msg in client.iter_messages(chat, 10):
print(msg.sender.first_name, ':', msg.text)
Applying any formatting is still possible but hasattr is no longer needed. if msg.media for example would be enough to check if the message has media.
A note, if you're using Jupyter, you need to use async directly:
from telethon import TelegramClient
client = TelegramClient('session_id', api_id, api_hash)
# Note `async with` and `async for`
async with client:
async for msg in client.iter_messages(chat, 10):
print(msg.sender.first_name, ':', msg.text)
A: Now, you can use TDesktop to export chats.
Here is the blog post about Aug 2018 update.
Original Answer:
Telegram MTProto is hard for newbies to use, so I recommend telegram-cli.
You can use the third-party tg-export script, but it is still not easy for newbies either.
A: You can use Telethon. The Telegram API is fairly complicated, and with Telethon you can start using it in a very short time without any prior knowledge of the API.
pip install telethon
Then register your app (taken from telethon):
the link is: https://my.telegram.org/
Then to obtain message history of a group (assuming you have the group id):
chat_id = YOUR_CHAT_ID
api_id=YOUR_API_ID
api_hash = 'YOUR_API_HASH'
from telethon import TelegramClient
from telethon.tl.types.input_peer_chat import InputPeerChat
client = TelegramClient('session_id', api_id=api_id, api_hash=api_hash)
client.connect()
chat = InputPeerChat(chat_id)
total_count, messages, senders = client.get_message_history(
chat, limit=10)
for msg in reversed(messages):
# Format the message content
if getattr(msg, 'media', None):
content = '<{}> {}'.format( # The media may or may not have a caption
msg.media.__class__.__name__,
getattr(msg.media, 'caption', ''))
elif hasattr(msg, 'message'):
content = msg.message
elif hasattr(msg, 'action'):
content = str(msg.action)
else:
# Unknown message, simply print its class name
content = msg.__class__.__name__
text = '[{}:{}] (ID={}) {}: {}'.format(
    msg.date.hour, msg.date.minute, msg.id, "no name",
    content)
print (text)
The example is taken and simplified from telethon example.
A: You can use the Telethon library. For this you need to register your app and connect your client code to it (look at this).
Then to obtain the message history of an entity (such as a channel, group or chat):
from telethon.sync import TelegramClient
from telethon.errors import SessionPasswordNeededError
client = TelegramClient(username, api_id, api_hash, proxy=("socks5", proxy_ip, proxy_port)) # if in your country telegram is banned, you can use the proxy, otherwise remove it.
client.start()
# for login
if not client.is_user_authorized():
client.send_code_request(phone)
try:
client.sign_in(phone, input('Enter the code: '))
except SessionPasswordNeededError:
client.sign_in(password=input('Password: '))
messages = []  # collect results here; Message is your own wrapper class
for message in client.iter_messages(chat_id, wait_time=0):  # plain for works with telethon.sync
    messages.append(Message(message))
    # write your code
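If you go the Telegram Desktop export route mentioned in an earlier answer instead of the API, the result is a JSON file you can post-process. A minimal Node.js sketch; the schema assumed here (a top-level messages array with from and text fields) should be verified against your own result.json:

```javascript
// Assumed shape of Telegram Desktop's exported result.json -- verify the
// field names against your own export before relying on them.
const sample = JSON.stringify({
  name: 'My Group',
  messages: [
    { id: 1, from: 'Alice', text: 'hello' },
    { id: 2, from: 'Bob', text: 'hi' },
  ],
});

// In practice: JSON.parse(require('fs').readFileSync('result.json', 'utf8'))
const data = JSON.parse(sample);
for (const msg of data.messages) {
  console.log(`${msg.from}: ${msg.text}`);
}
```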
|
stackoverflow
|
{
"language": "en",
"length": 635,
"provenance": "stackexchange_0000F.jsonl.gz:840917",
"question_score": "21",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467293"
}
|
98eb4a41bd61a20306180dd2a1f8838c82f1cab7
|
Stackoverflow Stackexchange
Q: Mongoose - remove multiple documents in one function call In documentation there's deleteMany() method
Character.deleteMany({ name: /Stark/, age: { $gte: 18 } }, function (err) {});
I want to remove multiple documents that have one common property and the other property varies. Something like this:
Site.deleteMany({ userUID: uid, id: [10, 2, 3, 5]}, function(err) {})
What would be the correct syntax for this?
A: I believe what you're looking for is the $in operator:
Site.deleteMany({ userUID: uid, id: { $in: [10, 2, 3, 5]}}, function(err) {})
Documentation here: https://docs.mongodb.com/manual/reference/operator/query/in/
|
Q: Mongoose - remove multiple documents in one function call In documentation there's deleteMany() method
Character.deleteMany({ name: /Stark/, age: { $gte: 18 } }, function (err) {});
I want to remove multiple documents that have one common property and the other property varies. Something like this:
Site.deleteMany({ userUID: uid, id: [10, 2, 3, 5]}, function(err) {})
What would be the correct syntax for this?
A: I believe what you're looking for is the $in operator:
Site.deleteMany({ userUID: uid, id: { $in: [10, 2, 3, 5]}}, function(err) {})
Documentation here: https://docs.mongodb.com/manual/reference/operator/query/in/
A: I had to change id to _id for it to work:
Site.deleteMany({ _id: [1, 2, 3] });
This happens if no id is defined and the default one is used instead:
"Mongoose assigns each of your schemas an _id field by default if one is not passed into the Schema constructor." mongoose docs
A: Yes, $in is a perfect solution:
Site.deleteMany({ userUID: uid, id: { $in: [10, 2, 3, 5] } }, function(err) {})
A: You can also use:
Site.remove({ userUID: uid, id: { $in: [10, 2, 3, 5]}}, function(err, response) {});
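For intuition, this is what the { userUID: uid, id: { $in: [...] } } filter selects; a plain in-memory illustration, not Mongoose itself:

```javascript
// In-memory stand-in for the collection, to show which documents the
// { userUID: uid, id: { $in: ids } } filter would match for deletion.
const sites = [
  { userUID: 'u1', id: 10 },
  { userUID: 'u1', id: 4 },
  { userUID: 'u2', id: 2 },
];
const uid = 'u1';
const ids = [10, 2, 3, 5];

// $in matches when the field's value equals any element of the array.
const toDelete = sites.filter(s => s.userUID === uid && ids.includes(s.id));
console.log(toDelete); // [ { userUID: 'u1', id: 10 } ]
```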
|
stackoverflow
|
{
"language": "en",
"length": 184,
"provenance": "stackexchange_0000F.jsonl.gz:840924",
"question_score": "43",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467318"
}
|
328be290a155b62705f099c8dfa6b3190c81129e
|
Stackoverflow Stackexchange
Q: Formatting JSON Google Vision OCR Language Hints JSON formatting is a weakness of mine, and I am running a script that is submitting json requests to google vision API for OCR on images. The results are poor, so I think I may need to add Language Hints. Here is the basic json call:
{
"requests": [
{
"image": {
"source": {
"gcsImageUri": "gs://YOUR_BUCKET_NAME/YOUR_FILE_NAME"
}
},
"features": [
{
"type": "TEXT_DETECTION"
}
]
}
]
}
Here is the page showing Language Hints. How can I add it to the JSON in a valid way? I keep getting syntax errors!
A: In case there are others who stumble on this question. This is the request for English and Chinese.
{
"requests": [
{
"image": {
"source": {
"gcsImageUri": "gs://YOUR_BUCKET_NAME/YOUR_FILE_NAME"
}
},
"features": [
{
"type": "TEXT_DETECTION"
}
],
"imageContext": {
"languageHints": [
"en", "zh"
]
}
}
]
}
Languages can be found here: https://cloud.google.com/vision/docs/languages.
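Since hand-editing JSON is error-prone, it can also help to build the request as an object and serialize it; JSON.stringify always emits syntactically valid JSON:

```javascript
// Build the Vision request programmatically instead of hand-writing JSON.
const request = {
  requests: [
    {
      image: { source: { gcsImageUri: 'gs://YOUR_BUCKET_NAME/YOUR_FILE_NAME' } },
      features: [{ type: 'TEXT_DETECTION' }],
      imageContext: { languageHints: ['en', 'zh'] },
    },
  ],
};

// Guaranteed-valid JSON body, pretty-printed with 2-space indentation.
const body = JSON.stringify(request, null, 2);
console.log(body);
```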
|
|
stackoverflow
|
{
"language": "en",
"length": 156,
"provenance": "stackexchange_0000F.jsonl.gz:840932",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467350"
}
|
5f52c47a4bdf0fe3eaec6f28d040ad332ec22ac2
|
Stackoverflow Stackexchange
Q: Setting up CloudWatch dimensions for APIGateway methods in CloudFormation I have an API, say apifortest, which has about 10 methods under different paths. Those methods are GET, PUT and POST. What I want to do is create a CloudWatch monitor for these.
I was looking at documentation here
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/api-gateway-metrics-dimensions.html
This is what I had earlier
TestApiCloudWatch:
Type: "AWS::CloudWatch::Alarm"
Properties:
ActionsEnabled: "True"
AlarmName: "ApiGateway-TestAPI-5XXError-SEV2"
ComparisonOperator: !FindInMap [APIGatewayCloudWatchMappings, 5XXError-SEV2, ComparisonOperator]
Dimensions:
-
Name: "ApiName"
Value: "APIForTest"
-
Name: "Stage"
Value: "Prod"
EvaluationPeriods: !FindInMap [APIGatewayCloudWatchMappings, 5XXError-SEV2, EvaluationPeriods]
MetricName: !FindInMap [APIGatewayCloudWatchMappings, 5XXError-SEV2, MetricName]
Namespace: "AWS/ApiGateway"
Period: !FindInMap [APIGatewayCloudWatchMappings, 5XXError-SEV2, Period]
Statistic: !FindInMap [APIGatewayCloudWatchMappings, 5XXError-SEV2, Statistic]
Threshold: !FindInMap [APIGatewayCloudWatchMappings, 5XXError-SEV2, Threshold]
But this alarm is being set at the API level. I want to set it up at the method level. The documentation does state that we can do so, but it doesn't have any example.
Any help would be appreciated.
A: The documentation lists the dimensions you need to use:
*
*API Name - the name of the API. You already have this.
*Stage - the name of the stage of the API. You already have this.
*Method - The HTTP method (e.g. GET, PUT, DELETE)
*Resource - The resource path (e.g. /foo/bar)
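Concretely, a method-level alarm adds the Method and Resource dimensions alongside the two already present; the method and resource path values below are examples for one of the ten methods:

```yaml
Dimensions:
  -
    Name: "ApiName"
    Value: "APIForTest"
  -
    Name: "Stage"
    Value: "Prod"
  -
    Name: "Method"
    Value: "GET"
  -
    Name: "Resource"
    Value: "/foo/bar"
```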
|
|
stackoverflow
|
{
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:840961",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467446"
}
|
4d56acaf4b5d69a1efda1c5b196dbc756de61105
|
Stackoverflow Stackexchange
Q: CMake does not build an executable with add_executable I am new to CMake and I have a problem creating an executable using CMake. I am trying to build an executable and a shared library from a single CMakeLists.txt file. My CMakeLists.txt is as follows:
cmake_minimum_required(VERSION 3.4.1)
project (TestService)
include_directories(
src/main/cpp/
libs/zlib/include/
)
add_library(libz SHARED IMPORTED)
set_target_properties(libz PROPERTIES IMPORTED_LOCATION ${PROJECT_SOURCE_DIR}/libs/zlib/libs/${ANDROID_ABI}/libz.so)
find_library(log-lib log)
add_executable(
test_utility
src/main/cpp/test_utility.cpp
src/main/cpp/storage.cpp
)
target_link_libraries(test_utility ${log-lib} libz)
add_library(
processor
SHARED
src/main/cpp/com_example_testservice.cpp
src/main/cpp/storage.cpp
)
target_link_libraries(processor libz ${log-lib})
However when I build my project using android studio/gradlew from command line, I only see the processor.so library getting created, test_utility executable is never created. What is incorrect in my CMakeLists.txt?
A: The answer is: it builds, it's just not packaged into the apk because only files matching the pattern lib*.so will be copied. Therefore the fix is easy:
add_executable(libnativebinaryname.so ...)
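Applied to the CMakeLists.txt from the question, the rename would look like this (the libtest_utility.so name is just an example that satisfies the lib*.so packaging pattern):

```cmake
add_executable(
    libtest_utility.so          # executable named to match the lib*.so pattern
    src/main/cpp/test_utility.cpp
    src/main/cpp/storage.cpp
)
target_link_libraries(libtest_utility.so ${log-lib} libz)
```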
|
Q: CMake does not build an executable with add_executable I am new to CMake and I have a problem creating an executable using CMake. I am trying to build an executable and a shared library from a single CMakeLists.txt file. My CMakeLists.txt is as follows:
cmake_minimum_required(VERSION 3.4.1)
project (TestService)
include_directories(
src/main/cpp/
libs/zlib/include/
)
add_library(libz SHARED IMPORTED)
set_target_properties(libz PROPERTIES IMPORTED_LOCATION ${PROJECT_SOURCE_DIR}/libs/zlib/libs/${ANDROID_ABI}/libz.so)
find_library(log-lib log)
add_executable(
test_utility
src/main/cpp/test_utility.cpp
src/main/cpp/storage.cpp
)
target_link_libraries(test_utility ${log-lib} libz)
add_library(
processor
SHARED
src/main/cpp/com_example_testservice.cpp
src/main/cpp/storage.cpp
)
target_link_libraries(processor libz ${log-lib})
However when I build my project using android studio/gradlew from command line, I only see the processor.so library getting created, test_utility executable is never created. What is incorrect in my CMakeLists.txt?
A: The answer is: it builds, it's just not packaged into the apk because only files matching the pattern lib*.so will be copied. Therefore the fix is easy:
add_executable(libnativebinaryname.so ...)
A: It's hard to say what's happening under the hood without seeing the actual command.
That being said, probably you are running make processor, which explicitly builds only the processor target. From your CMakeLists.txt you can see that the processor target does not have the test_utility target as a dependency.
To compile the latter you can:
*
*either use make, to make all the targets
*or run make test_utility, to build it explicitly
A: You need to specify your executable as a build target. Android Studio builds .so files by default, but will not build executables unless you specify them. Here's the documentation on the topic (search for "targets").
Basically, add something like this to your module's build.gradle file:
defaultConfig {
externalNativeBuild {
cmake {
targets "executable_target"
}
}
}
You can also place it under a product flavor like this:
productFlavors {
chocolate {
externalNativeBuild {
cmake {
targets "executable_target"
}
}
}
}
If you add any explicit build target, it will no longer build all shared objects by default, only those that are dependents of the explicit target(s). You can specify more than one target to build all your executables and shared objects. This bug covers improving that.
|
stackoverflow
|
{
"language": "en",
"length": 334,
"provenance": "stackexchange_0000F.jsonl.gz:840980",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467516"
}
|
fdebe54fbacc4293b262938d799ff0109767bb24
|
Stackoverflow Stackexchange
Q: Better way to disable console inside unit tests I wonder if there is a better way to disable console errors inside a specific Jest test (i.e. restore the original console before/after each test).
Here is my current approach:
describe("Some description", () => {
let consoleSpy;
beforeEach(() => {
if (typeof consoleSpy === "function") {
consoleSpy.mockRestore();
}
});
test("Some test that should not output errors to jest console", () => {
expect.assertions(2);
consoleSpy = jest.spyOn(console, "error").mockImplementation();
// some function that uses console error
expect(someFunction).toBe("X");
expect(consoleSpy).toHaveBeenCalled();
});
test("Test that has console available", () => {
// shows up during jest watch test, just as intended
console.error("test");
});
});
Is there a cleaner way of accomplishing the same thing? I would like to avoid spyOn, but mockRestore only seems to work with it.
A: As every test file runs in its own thread, there is no need to restore it if you want to disable it for all tests in one file. For the same reason you can also just write
console.log = jest.fn()
expect(console.log).toHaveBeenCalled();
|
Q: Better way to disable console inside unit tests I wonder if there is a better way to disable console errors inside a specific Jest test (i.e. restore the original console before/after each test).
Here is my current approach:
describe("Some description", () => {
let consoleSpy;
beforeEach(() => {
if (typeof consoleSpy === "function") {
consoleSpy.mockRestore();
}
});
test("Some test that should not output errors to jest console", () => {
expect.assertions(2);
consoleSpy = jest.spyOn(console, "error").mockImplementation();
// some function that uses console error
expect(someFunction).toBe("X");
expect(consoleSpy).toHaveBeenCalled();
});
test("Test that has console available", () => {
// shows up during jest watch test, just as intended
console.error("test");
});
});
Is there a cleaner way of accomplishing the same thing? I would like to avoid spyOn, but mockRestore only seems to work with it.
A: As every test file runs in its own thread, there is no need to restore it if you want to disable it for all tests in one file. For the same reason you can also just write
console.log = jest.fn()
expect(console.log).toHaveBeenCalled();
A: beforeAll(() => {
jest.spyOn(console, 'log').mockImplementation(() => {});
jest.spyOn(console, 'error').mockImplementation(() => {});
jest.spyOn(console, 'warn').mockImplementation(() => {});
jest.spyOn(console, 'info').mockImplementation(() => {});
jest.spyOn(console, 'debug').mockImplementation(() => {});
});
A: Here's all the lines you may want to use. You can put them right in the test:
jest.spyOn(console, 'warn').mockImplementation(() => {});
console.warn("You won't see me!")
expect(console.warn).toHaveBeenCalled();
console.warn.mockRestore();
A: Weirdly, the answers above (except Raja's great answer; I wanted to share the weird way the others fail, and how to clear the mock, so no one else wastes the time I did) seem to successfully create the mock but don't suppress the logging to the console.
Both
const consoleSpy = jest.spyOn(console, 'warn').mockImplementation(() => {});
and
global.console = {
warn: jest.fn().mockImplementation(() => {})
}
successfully install the mock (I can use expect(console.warn).toBeCalledTimes(1) and it passes) but it still outputs the warning even though the mock implementation seemingly should be replacing the default (this is in a jsdom environment).
Eventually I found a hack to fix the problem and put the following in the file loaded with SetupFiles in your config (note that I found sometimes global.$ didn't work for me when putting jquery into global context so I just set all my globals this way in my setup).
const consoleWarn = jest.spyOn(console, 'warn').mockImplementation(() => {});
const consoleLog = jest.spyOn(console, 'log').mockImplementation(() => {});
const consoleDebug = jest.spyOn(console, 'debug').mockImplementation(() => {});
const consoleError = jest.spyOn(console, 'error').mockImplementation(() => {});
Object.defineProperty(global, 'console', {value: {
warn: consoleWarn,
log: consoleLog,
debug: consoleDebug,
error: consoleError}});
It feels ugly and I then have to put code like the following in each test file since beforeEach isn't defined in the files referenced by SetupFiles (maybe you could put both in SetupFilesAfterEnv but I haven't tried).
beforeEach(() => {
console.warn.mockClear();
});
A: For a particular spec file, Andreas's answer is good enough. The setup below will suppress console.log statements for all test suites,
jest --silent
(or)
To customize warn, info and debug you can use below setup
tests/setup.js or jest-preload.js configured in setupFilesAfterEnv
global.console = {
...console,
// uncomment to ignore a specific log level
log: jest.fn(),
debug: jest.fn(),
info: jest.fn(),
// warn: jest.fn(),
// error: jest.fn(),
};
jest.config.js
module.exports = {
verbose: true,
setupFilesAfterEnv: ["<rootDir>/__tests__/setup.js"],
};
A: I found that the answer above re: suppressing console.log across all test suites threw errors when any other console methods (e.g. warn, error) were called since it was replacing the entire global console object.
This somewhat similar approach worked for me with Jest 22+:
package.json
"jest": {
"setupFiles": [...],
"setupTestFrameworkScriptFile": "<rootDir>/jest/setup.js",
...
}
jest/setup.js
jest.spyOn(global.console, 'log').mockImplementation(() => jest.fn());
Using this method, only console.log is mocked and other console methods are unaffected.
A: Since jest.spyOn doesn't work for this (it may have in the past), I resorted to jest.fn with a manual mock restoration as pointed out in Jest docs. This way, you should not miss any logs which are not empirically ignored in a specific test.
const consoleError = console.error
beforeEach(() => {
console.error = consoleError
})
test('with error', () => {
console.error = jest.fn()
console.error('error') // can't see me
})
test('with error and log', () => {
console.error('error') // now you can
})
A: If you are using command npm test to run test then change the test script in package.json like below
{
....
"name": "....",
"version": "0.0.1",
"private": true,
"scripts": {
"android": "react-native run-android",
"ios": "react-native run-ios",
"start": "react-native start",
"test": "jest --silent", // add --silent to jest in script like this
"lint": "eslint ."
},
...
}
Or else you can directly run command npx jest --silent to get rid of all logs and errors when testing
A: If you want to do it just for a specific test:
beforeEach(() => {
jest.spyOn(console, 'warn').mockImplementation(() => {});
});
A: To me a more clear/clean way (reader needs little knowledge of the jest API to understand what is happening), is to just manually do what mockRestore does:
// at start of test you want to suppress
const consoleLog = console.log;
console.log = jest.fn();
// at end of test
console.log = consoleLog;
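The same save/replace/restore idea works in plain JavaScript with no Jest at all, which makes clear there is no magic behind mockRestore:

```javascript
// Save the original, replace it with a recording stub, then restore.
const originalLog = console.log;
const calls = [];
console.log = (...args) => { calls.push(args); }; // suppress + record

console.log('hidden'); // not printed, just recorded

console.log = originalLog; // restore the real console.log
originalLog('calls recorded:', calls.length);
```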
A: Kudos to @Raja's top answer. Here is what I am using (I would comment, but can't share a multi-line code block in a comment).
With jest v26, I'm getting this error:
We detected setupFilesAfterEnv in your package.json.
Remove it from Jest configuration, and put the initialization code in src/setupTests.js:
This file will be loaded automatically.
Therefore, I had to remove the setupFilesAfterEnv from my jest config, and add this to src/setupTests.js
// https://stackoverflow.com/questions/44467657/jest-better-way-to-disable-console-inside-unit-tests
const nativeConsoleError = global.console.error
global.console.error = (...args) => {
if (args.join('').includes('Could not parse CSS stylesheet')) {
return
}
return nativeConsoleError(...args)
}
A: Another approach is to use process.env.NODE_ENV. This way one can selectively choose what to show (or not) while running tests:
if (process.env.NODE_ENV === 'development') {
console.log('Show output only while in "development" mode');
} else if (process.env.NODE_ENV === 'test') {
console.log('Show output only while in "test" mode');
}
or
const logDev = msg => {
if (process.env.NODE_ENV === 'development') {
console.log(msg);
}
}
logDev('Show output only while in "development" mode');
This will require this configuration to be placed on package.json:
"jest": {
"globals": {
"NODE_ENV": "test"
}
}
Note that this approach is not a direct solution to the original question, but gives the expected result as long as one has the possibility to wrap the console.log with the mentioned condition.
|
stackoverflow
|
{
"language": "en",
"length": 1053,
"provenance": "stackexchange_0000F.jsonl.gz:841020",
"question_score": "216",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467657"
}
|
2b384c02ba22f6f10fe5cb89a4152de4445d6bbe
|
Stackoverflow Stackexchange
Q: Creating a page for date entry for Roku? I'm working on a Roku app and we want the user's date of birth. Trying not to get too complex on the parsing end (so would prefer to not just have a text box where the user can enter whatever they want). I looked into using a roPinEntryDialog, but that unfortunately I think is only meant for entering payment information. I see that roDateTime is a thing, but that seems to only get the current date, and not have any types of inputs for it.
Any ideas or help?
Thanks!
A: What you can do depends on whether you are writing the app in SDK1 style (the older but simpler components that are being deprecated now) or RSG style (Roku Scene Graph - the newer, way more complex way of implementing).
If using RSG, I would think LabelList is a good start to implement something akin to iOS's UIDatePicker. E.g. with the remote's Up/Down the user selects the month, then presses Right to move to the day column, then Right again onto the year list.
|
Q: Creating a page for date entry for Roku? I'm working on a Roku app and we want the user's date of birth. Trying not to get too complex on the parsing end (so would prefer to not just have a text box where the user can enter whatever they want). I looked into using a roPinEntryDialog, but that unfortunately I think is only meant for entering payment information. I see that roDateTime is a thing, but that seems to only get the current date, and not have any types of inputs for it.
Any ideas or help?
Thanks!
A: What you can do depends on whether you are writing the app in SDK1 style (the older but simpler components that are being deprecated now) or RSG style (Roku Scene Graph - the newer, way more complex way of implementing).
If using RSG, I would think LabelList is a good start to implement something akin to iOS's UIDatePicker. E.g. with the remote's Up/Down the user selects the month, then presses Right to move to the day column, then Right again onto the year list.
A: The solution I ended up using was to use a regular text keyboard, and validate the input with regex:
getText: function(ageValidate as Boolean, defaultText as String, displayText as String) as String
    screen = CreateObject("roKeyboardScreen")
    port = CreateObject("roMessagePort")
    screen.SetMessagePort(port)
    screen.SetDisplayText(displayText)
    screen.SetText(defaultText)
    screen.SetMaxLength(100)
    screen.AddButton(1, "done")
    screen.AddButton(2, "back")
    screen.Show()
    while true
        msg = wait(0, screen.GetMessagePort())
        if type(msg) = "roKeyboardScreenEvent"
            if msg.isScreenClosed()
                return ""
            else if msg.isButtonPressed() then
                if msg.GetIndex() = 1 ' "done" pressed
                    text = screen.GetText()
                    if ageValidate = true AND m.isValidDate(text) = false then
                        showUserDialog("Invalid Input", "Input your birthdate in the format MMDDYYYY", "okay")
                    else if text = invalid OR text = ""
                        showUserDialog("Error", "no input", "okay")
                    else
                        screen.Close()
                        return text
                    end if
                else if msg.GetIndex() = 2 ' "back" pressed
                    screen.Close()
                    return ""
                end if
            end if
        end if
    end while
end function
isValidDate: function(date as String) as Boolean
    return CreateObject("roRegex", "(0[1-9]|1[012])[-.]?(0[1-9]|[12][0-9]|3[01])[-.]?(19|20)[0-9]{2}", "i").IsMatch(date)
end function
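For readers outside BrightScript, the same MMDDYYYY check can be sketched with Python's standard re module (an illustrative translation of the regex above, not part of the Roku code):

```python
import re

# Same pattern as the BrightScript roRegex above: MM (01-12), DD (01-31),
# optional "-" or "." separators, and a 19xx/20xx year.
DATE_RE = re.compile(r"(0[1-9]|1[012])[-.]?(0[1-9]|[12][0-9]|3[01])[-.]?(19|20)[0-9]{2}")

def is_valid_date(date: str) -> bool:
    # fullmatch anchors the pattern so surrounding stray text doesn't pass
    return DATE_RE.fullmatch(date) is not None

print(is_valid_date("06131990"))  # True
print(is_valid_date("13131990"))  # False: there is no month 13
```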
|
stackoverflow
|
{
"language": "en",
"length": 310,
"provenance": "stackexchange_0000F.jsonl.gz:841039",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467750"
}
|
7d475f3727475838aa09e837f3fe2ab8dca2a40d
|
Stackoverflow Stackexchange
Q: Directly inheriting from ActiveRecord::Migration is not supported I just finished my first Ruby on Rails app, and I'm trying to deploy it to Heroku. I am now at the final step, but when I run the following command (heroku run rake db:migrate), I am getting this error :
StandardError : Directly inheriting from ActiveRecord::Migration is not supported.
Please specify the Rails release the migration was written for.
Everyone on the web is saying that you just have to change
class CreateRelationships < ActiveRecord::Migration
to
class CreateRelationships < ActiveRecord::Migration[4.2]
The problem is that this solution doesn't work for me. Thank you in advance!
A: Add [5.1] if your Rails version is 5.1.x (even for 5.1.5, just use 5.1):
class CreateRelationships < ActiveRecord::Migration[5.1]
as in this thread. Check at the very top of your Gemfile to see which version of Rails you have.
I ran bundle install straight afterwards, then re-ran the command that originally showed this error, and it worked. I'm not sure you actually need to run bundle install, though.
Hope this helps
Sput
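If a project has many old-style migrations, the version tag can be added in bulk. A throwaway sketch (Python; the db/migrate path and version are assumptions, adjust to your app):

```python
import re
from pathlib import Path

RAILS_VERSION = "5.1"  # take this from the top of your Gemfile
# Match a superclass declaration that has no [x.y] version tag yet.
PATTERN = re.compile(r"(< ActiveRecord::Migration)\s*$", re.MULTILINE)

def tag_migrations(migrate_dir: str) -> None:
    """Append [RAILS_VERSION] to bare ActiveRecord::Migration superclasses."""
    for path in Path(migrate_dir).glob("*.rb"):
        src = path.read_text()
        fixed = PATTERN.sub(rf"\1[{RAILS_VERSION}]", src)
        if fixed != src:
            path.write_text(fixed)
            print(f"updated {path.name}")
```

Because the pattern only matches untagged declarations, running the script twice is harmless.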
|
stackoverflow
|
{
"language": "en",
"length": 173,
"provenance": "stackexchange_0000F.jsonl.gz:841048",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467768"
}
|
f254321fde03a92162ec601779c0805a4894d79e
|
Stackoverflow Stackexchange
Q: json schema to pojo generator What is the best JSON-to-POJO generator that supports the oneOf/allOf/anyOf features? We are currently using a custom one which doesn't support the latest additions in the JSON. I have tried some of the generators that show up in a Google search, but they didn't work.
A: I use https://github.com/java-json-tools/json-schema-validator for schema validation and Jackson as the POJO generator.
However, I did not find any explicit support for allOf/anyOf/oneOf in Jackson. But Jackson has a rich set of annotations, and it can be built using those.
You can refer to the discussion at https://github.com/joelittlejohn/jsonschema2pojo/issues/392 to see if something helpful is there for you.
|
stackoverflow
|
{
"language": "en",
"length": 103,
"provenance": "stackexchange_0000F.jsonl.gz:841067",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467816"
}
|
b67046ab0bf6e622f4fc529133ee8880bfc7fed9
|
Stackoverflow Stackexchange
Q: JavaFX8 fxml naming of nested controllers Given an .fxml include like:
<fx:include fx:id="header" source="Header.fxml" />
The Java FXML docs say to create two variables like:
@FXML private HBox header;
@FXML private HeaderController headerController;
What determines the controller variable name? Is it always just the include id followed by "Controller"?
A: Yes, the field name the controller is injected into is always constructed by concatenating the fx:id of the <fx:include> tag with "Controller".
It's "hidden" in the documentation of the FXMLLoader.CONTROLLER_SUFFIX field.
A suffix for controllers of included fxml files. The full key is stored in namespace map.
(The namespace map contains all the objects by the field name they are injected to, if such a field exists.)
You can verify that its value is "Controller" here: https://docs.oracle.com/javase/8/javafx/api/constant-values.html#javafx.fxml.FXMLLoader.CONTROLLER_SUFFIX
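The rule is plain string concatenation; as a quick sketch:

```python
# Value of the FXMLLoader.CONTROLLER_SUFFIX constant
CONTROLLER_SUFFIX = "Controller"

def controller_field_name(fx_id: str) -> str:
    """Name of the field FXMLLoader injects the included file's controller into."""
    return fx_id + CONTROLLER_SUFFIX

print(controller_field_name("header"))  # headerController
```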
|
stackoverflow
|
{
"language": "en",
"length": 128,
"provenance": "stackexchange_0000F.jsonl.gz:841129",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44467982"
}
|
ca6313db9384d89480650c7e619e6bf780b50527
|
Stackoverflow Stackexchange
Q: How to retrieve http get response as a complete json String in groovy using httpbuilder I want to use the response json of a GET request as an input to another request. For that the response that I receive should be in correct json format. I am using HttpBuilder to do this.
HTTPBuilder http = new HTTPBuilder(urlParam, ContentType.JSON);
http.headers.Accept = ContentType.JSON;
http.parser[ContentType.JSON] = http.parser.'application/json'
return http.request(GET) {
    response.success = { resp, json ->
        return json.toString()
    }
}
When I return json.toString(), it is not well-formed JSON. How do I achieve that? When I open my GET URL I see the entire JSON, but not with the above code. Thanks for your help.
A: With groovy.json.JsonOutput:
HTTPBuilder http = new HTTPBuilder('http://date.jsontest.com/', ContentType.JSON);
http.headers.Accept = ContentType.JSON
http.parser[ContentType.JSON] = http.parser.'application/json'
http.request(Method.GET) {
    response.success = { resp, json ->
        println json.toString() // Not valid JSON
        println JsonOutput.toJson(json) // Valid JSON
        println JsonOutput.prettyPrint(JsonOutput.toJson(json))
    }
}
Result:
{time=09:41:21 PM, milliseconds_since_epoch=1497303681991, date=06-12-2017}
{"time":"09:41:21 PM","milliseconds_since_epoch":1497303681991,"date":"06-12-2017"}
{
"time": "09:41:21 PM",
"milliseconds_since_epoch": 1497303681991,
"date": "06-12-2017"
}
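The same pitfall exists in other languages; e.g. in Python, a dict's default string form is not valid JSON either, while json.dumps produces it (an analogy sketch, not part of the Groovy answer):

```python
import json

data = {"time": "09:41:21 PM", "milliseconds_since_epoch": 1497303681991, "date": "06-12-2017"}

print(str(data))                   # single-quoted repr -- not valid JSON
print(json.dumps(data))            # valid JSON, like JsonOutput.toJson
print(json.dumps(data, indent=2))  # pretty-printed, like JsonOutput.prettyPrint
```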
|
stackoverflow
|
{
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:841135",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468009"
}
|
cb6fec1d325e94bf953b2b1bdd9a0dac6c74b34a
|
Stackoverflow Stackexchange
Q: Comparing three arrays & get the values which are only in array1 I want to find all elements of an array a1 which items are not a part of array a2 and array a3.
For example:
$a1 = @(1,2,3,4,5,6,7,8)
$a2 = @(1,2,3)
$a3 = @(4,5,6,7)
Expected result:
8
A: Try this:
$a2AndA3 = $a2 + $a3
$notInA2AndA3 = $a1 | Where-Object {!$a2AndA3.contains($_)}
As a one liner:
$notInA2AndA3 = $a1 | Where {!($a2 + $a3).contains($_)}
A: k7s5a's helpful answer is conceptually elegant and convenient, but there's a caveat:
It doesn't scale well, because an array lookup must be performed for each $a1 element.
At least for larger arrays, PowerShell's Compare-Object cmdlet is the better choice:
If the input arrays are ALREADY SORTED:
(Compare-Object $a1 ($a2 + $a3) | Where-Object SideIndicator -eq '<=').InputObject
Note:
* Compare-Object doesn't require sorted input, but sorted input can greatly enhance performance - see below.
* As Esperento57 points out, (Compare-Object $a1 ($a2 + $a3)).InputObject is sufficient in the specific case at hand, but only because $a2 and $a3 happen not to contain elements that aren't also in $a1.
Therefore, the more general solution is to use the filter Where-Object SideIndicator -eq '<=', because it limits the results to objects unique to the LHS ($a1), and not also vice versa.
If the input arrays are NOT SORTED:
Explicitly sorting the input arrays before comparing them greatly enhances performance:
(Compare-Object ($a1 | Sort-Object) ($a2 + $a3 | Sort-Object) |
Where-Object SideIndicator -eq '<=').InputObject
The following example, which uses a 10,000-element array, illustrates the difference in performance:
$count = 10000 # Adjust this number to test scaling.
$a1 = 0..$($count-1) # With 10,000: 0..9999
$a2 = 0..$($count/2) # With 10,000: 0..5000
$a3 = $($count/2+1)..($count-3) # With 10,000: 5001..9997
$(foreach ($pass in 1..2) {
if ($pass -eq 1 ) {
$passDescr = "SORTED input"
} else {
$passDescr = "UNSORTED input"
# Shuffle the arrays.
$a1 = $a1 | Get-Random -Count ([int]::MaxValue)
$a2 = $a2 | Get-Random -Count ([int]::MaxValue)
$a3 = $a3 | Get-Random -Count ([int]::MaxValue)
}
[pscustomobject] @{
TestCategory = $passDescr
Test = "CompareObject, explicitly sorted first"
Timing = (Measure-Command {
(Compare-Object ($a1 | Sort-Object) ($a2 + $a3 | Sort-Object) | Where-Object SideIndicator -eq '<=').InputObject |
Out-Host; '---' | Out-Host
}).TotalSeconds
},
[pscustomobject] @{
TestCategory = $passDescr
Test = "CompareObject"
Timing = (Measure-Command {
(Compare-Object $a1 ($a2 + $a3) | Where-Object SideIndicator -eq '<=').InputObject |
Out-Host; '---' | Out-Host
}).TotalSeconds
},
[pscustomobject] @{
TestCategory = $passDescr
Test = "!.Contains(), two-pass"
Timing = (Measure-Command {
$a2AndA3 = $a2 + $a3
$a1 | Where-Object { !$a2AndA3.Contains($_) } |
Out-Host; '---' | Out-Host
}).TotalSeconds
},
[pscustomobject] @{
TestCategory = $passDescr
Test = "!.Contains(), two-pass, explicitly sorted first"
Timing = (Measure-Command {
$a2AndA3 = $a2 + $a3 | Sort-Object
$a1 | Sort-Object | Where-Object { !$a2AndA3.Contains($_) } |
Out-Host; '---' | Out-Host
}).TotalSeconds
},
[pscustomobject] @{
TestCategory = $passDescr
Test = "!.Contains(), single-pass"
Timing = (Measure-Command {
$a1 | Where-Object { !($a2 + $a3).Contains($_) } |
Out-Host; '---' | Out-Host
}).TotalSeconds
},
[pscustomobject] @{
TestCategory = $passDescr
Test = "-notcontains, two-pass"
Timing = (Measure-Command {
$a2AndA3 = $a2 + $a3
$a1 | Where-Object { $a2AndA3 -notcontains $_ } |
Out-Host; '---' | Out-Host
}).TotalSeconds
},
[pscustomobject] @{
TestCategory = $passDescr
Test = "-notcontains, two-pass, explicitly sorted first"
Timing = (Measure-Command {
$a2AndA3 = $a2 + $a3 | Sort-Object
$a1 | Sort-Object | Where-Object { $a2AndA3 -notcontains $_ } |
Out-Host; '---' | Out-Host
}).TotalSeconds
},
[pscustomobject] @{
TestCategory = $passDescr
Test = "-notcontains, single-pass"
Timing = (Measure-Command {
$a1 | Where-Object { ($a2 + $a3) -notcontains $_ } |
Out-Host; '---' | Out-Host
}).TotalSeconds
}
}) |
Group-Object TestCategory | ForEach-Object {
"`n=========== $($_.Name)`n"
$_.Group | Sort-Object Timing | Select-Object Test, @{ l='Timing'; e={ '{0:N3}' -f $_.Timing } }
}
Sample output from my machine (output of missing array elements omitted):
=========== SORTED input
Test Timing
---- ------
CompareObject 0.068
CompareObject, explicitly sorted first 0.187
!.Contains(), two-pass 0.548
-notcontains, two-pass 6.186
-notcontains, two-pass, explicitly sorted first 6.972
!.Contains(), two-pass, explicitly sorted first 12.137
!.Contains(), single-pass 13.354
-notcontains, single-pass 18.379
=========== UNSORTED input
Test Timing
---- ------
CompareObject, explicitly sorted first 0.198
CompareObject 6.617
-notcontains, two-pass 6.927
-notcontains, two-pass, explicitly sorted first 7.142
!.Contains(), two-pass 12.263
!.Contains(), two-pass, explicitly sorted first 12.641
-notcontains, single-pass 19.273
!.Contains(), single-pass 25.174
*
*While timings will vary based on many factors, you can get a sense that Compare-Object scales much better, if the input is either pre-sorted or sorted on demand, and the performance gap widens with increasing element count.
*When not using Compare-Object, performance can be somewhat increased - but not being able to take advantage of sorting is the fundamentally limiting factor:
*
*Neither -notcontains / -contains nor .Contains() can take full advantage of presorted input.
*If the input is already sorted: Using the .Contains() IList interface .NET method rather than the PowerShell -contains / -notcontains operators (which an earlier version of k7s5a's answer used) improves performance.
*Joining arrays $a2 and $a3 once, up front, and then using the joined array in the pipeline improves performance (that way, the arrays don't have to be joined in every iteration).
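For comparison, the scaling point generalizes: hash-based lookups avoid the per-element linear scan entirely. A sketch of the same operation in Python (outside the original PowerShell answer):

```python
a1 = [1, 2, 3, 4, 5, 6, 7, 8]
a2 = [1, 2, 3]
a3 = [4, 5, 6, 7]

# One set built up front gives O(1) average-case membership tests,
# so the whole difference is roughly O(len(a1) + len(a2) + len(a3)).
excluded = set(a2) | set(a3)
result = [x for x in a1 if x not in excluded]
print(result)  # [8]
```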
|
stackoverflow
|
{
"language": "en",
"length": 847,
"provenance": "stackexchange_0000F.jsonl.gz:841136",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468013"
}
|
6b655b3090727c0357ce39cc5326ba010e0d017d
|
Stackoverflow Stackexchange
Q: How can I detect a user recording my iOS app with the ReplayKit screen recording APIs? Apple posts a reliable notification for screenshot detection which I've been using, but I'd like to also detect if the user is recording my app with the new ReplayKit API. We can try to get a UIScreenDidConnectNotification or test the .mirroredScreen property to see if there's anything going on, but neither of these are reliable, despite Apple's old technote (https://developer.apple.com/library/content/qa/qa1738/_index.html) saying otherwise. We could look at the height of the status bar, but that has false positives.
Has anyone gotten something working for this?
A: Have you tried registering an RPScreenRecorderDelegate? There is a screenRecorderDidChangeAvailability callback.
https://developer.apple.com/documentation/replaykit/rpscreenrecorderdelegate?language=objc
|
stackoverflow
|
{
"language": "en",
"length": 114,
"provenance": "stackexchange_0000F.jsonl.gz:841153",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468061"
}
|
ee749dbfbfb841a50083cf2efad4f30ada2db7f1
|
Stackoverflow Stackexchange
Q: Unable to locate package libjasper-dev I want to install OpenCV on Ubuntu 17.04, and I know that the jasper library was removed from Ubuntu 17.04.
What should I do to install OpenCV correctly?
I tried the two commands below that are shown here, but they do not work:
sudo apt-get install opencv-data
sudo apt-get install libopencv-dev
A: Try this answer
You will be able to install the libjasper-dev from a previous release
A: Use these commands:
sudo add-apt-repository 'deb http://security.ubuntu.com/ubuntu xenial-security main'
sudo apt update
sudo apt install libjasper1 libjasper-dev
This worked on my Ubuntu 18.04 after I replaced the double-quotes with single quotes. With the double quotes I was getting this error:
Error: need a single repository as argument
A: Under Ubuntu 18.04, if you run add-apt-repository directly you will encounter another GPG error.
$ sudo add-apt-repository "deb http://security.ubuntu.com/ubuntu xenial-security main"
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://dl.yarnpkg.com/debian stable InRelease: The following signatures were invalid: ...EXPKEYSIGhttps://dl.yarnpkg.com/debian/dists/stable/InRelease The following signatures were invalid: EXPKEYSIG ...
You have to update the key
sudo apt-key adv --refresh-keys --keyserver keyserver.ubuntu.com
Then you are now safe to install libjasper-dev.
sudo apt-get install libjasper-dev
Reference
A: To build the latest version of libjasper as package for Ubuntu do the following:
Download the Jasper source code from here: https://github.com/jasper-software/jasper/tree/version-2.0.25
Run the following script:
#!/bin/bash
VERSION=2.0.25
unzip jasper-version-$VERSION.zip
cd jasper-version-$VERSION
mkdir compile
SOURCE_DIR=`pwd`
BUILD_DIR=compile
INSTALL_DIR=/usr
OPTIONS=
cmake -G "Unix Makefiles" -H$SOURCE_DIR -B$BUILD_DIR -DCMAKE_INSTALL_PREFIX=$INSTALL_DIR $OPTIONS
cd compile
make clean all
cat >description-pak <<EOF
JasPer Image Processing/Coding Tool Kit
EOF
fakeroot checkinstall --fstrans --install=no --pkgname=libjasper --pkgversion=$VERSION --pkgrelease 1 --pkglicense="JasPer 2.0" \
bash -c "make install" </dev/null
mv libjasper_$VERSION-1_amd64.deb ../..
cd ../..
rm -rf jasper-version-$VERSION
Result is a Debian package that can be installed using dpkg or apt.
A: This Solution was tested on mendel(debian) with arm64 architecture. If this works for Ubuntu is not clear.
Open terminal and run the following commands:
cd /etc/apt/sources.list.d
sudo nano multistrap-main.list
Add there these two lines:
deb http://ports.ubuntu.com/ubuntu-ports xenial-security main
deb http://ports.ubuntu.com/ubuntu-ports impish main
save and exit. Then run:
sudo apt update
If there is a key missing use the following and run again update:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <key>
Then install jasper:
sudo apt-get install libjasper-dev
Finally remove or comment out the added repositories from multistrap-main.list.
|
stackoverflow
|
{
"language": "en",
"length": 392,
"provenance": "stackexchange_0000F.jsonl.gz:841157",
"question_score": "19",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468081"
}
|
9535c4669813a5d7e5b9fcbdd660fd2b47bad2f7
|
Stackoverflow Stackexchange
Q: issue with transcript instruction in Pharo I have downloaded Pharo 6.0 and am trying to follow its famous or "infamous" book Pharo by Example (I call it like that because the books they give at their documentation page are never in sync with the programming language)
In the book it says to open a playground and put the following instruction:
Transcript show: 'hello world'; cr.
I have selected the instruction and selected Do It, but nothing happens; only a Ctrl+D shortcut hint appears and nothing more. I suppose the Workspace should appear with the message on it, but it is not working.
Any help with this?
A: It seems you have skipped a step.
From PBE 5 (http://files.pharo.org/books/updated-pharo-by-example/ )
Section 2.8
Let us start with some exercises:
*
*Close all open windows within Pharo.
*Open a Transcript and a Playground/workspace. (The Transcript can be
opened from the World > Tools > ... submenu.)
and then further down the page
Type the following text into the playground:
Transcript show: 'hello world'; cr.
the section also explains what both Transcript and Playground is.
A: Not near my books, so I don't know if this was missing in PBE or not, but I think it is straight-forward. You have successfully caused the Transcript to show text, but the Transcript isn't visible. There are three ways to make it so:
*
*From a playground, type and do the instruction
Transcript open
*From the world menu, select Tools/Transcript
*Use the keyboard shortcut Cmd-OT
Doing so will open the Transcript, which will then reveal the results of the "Transcript show:..."
Hope that helps.
A: Just my two cents. Even if it's a bit late, perhaps it'll help someone somehow.
You can do it this way:
Transcript
    open;
    show: 'your message';
    cr
Or in case you want to clear the window area before outputting new content:
Transcript
    clear;
    open;
    show: 'your message'
Of course you can type this all on one line.
A: If you are familiar with JavaScript, the Transcript is similar to the JavaScript console (actually it is the other way around, since Smalltalk and the Transcript precede JavaScript and its console, but I digress). So like the console, you can show stuff to it all day long, but if you would like to see what is in it, you have to open it.
|
stackoverflow
|
{
"language": "en",
"length": 389,
"provenance": "stackexchange_0000F.jsonl.gz:841158",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468087"
}
|
205a3cc7715695e1fe89dfc4dd5a78eeaace8016
|
Stackoverflow Stackexchange
Q: What is the purpose of setTitle(String title) method in ConstraintLayout 1.1.0? As of 1.1.0-beta1, the ConstraintLayout source code includes a field mTitle with a setter and getter for it, and it can also be set via an XML attribute. It's not used anywhere inside the library (at least a search finds no occurrences).
What is the purpose of this field?
|
stackoverflow
|
{
"language": "en",
"length": 54,
"provenance": "stackexchange_0000F.jsonl.gz:841167",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468128"
}
|
e3c5f5fe27003d2d5e17e873696aad45e5bcf066
|
Stackoverflow Stackexchange
Q: Eclipse auto-import not working anymore I use Eclipse Neon (on Mac OS) and work primarily with libgdx right now.
When I need to import some classes I do the CMD+Shift+O routine normally.
A non-libgdx class is imported fine, a libgdx class is only imported when I do it manually, so the build path is correct. Why does the "organize import"-command (applied by the key routine above) ignore libgdx?
I tried following steps:
*
*cleaning projects + restarting Eclipse
*restart computer
*refreshing gradle
*re-installing the same version of Eclipse
*installing an older version (Eclipse Mars)
Those steps don't solve the problem, maybe you have an idea?
|
stackoverflow
|
{
"language": "en",
"length": 106,
"provenance": "stackexchange_0000F.jsonl.gz:841190",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468218"
}
|
40e4eb35ca83fafba6f943ce1a05472f379ee8ba
|
Stackoverflow Stackexchange
Q: Android audio recording encoding/output combination for Bluemix I was recording audio on Android, but it was not being recognized on the IBM Bluemix Speech to Text side. IBM supports the following - https://www.ibm.com/watson/developercloud/doc/speech-to-text/input.html :
*
*audio/flac
*audio/l16
*audio/wav
*audio/ogg;codecs=vorbis
*audio/webm;codecs=vorbis
*audio/mulaw
*audio/basic
I tried hard to get ogg/webm vorbis recording on Android, but it's not working. I was wondering if you could help me understand which of these types I could use that would work with the Bluemix API.
For encoding options I have:
MediaRecorder.AudioEncoder.AAC
MediaRecorder.AudioEncoder.AAC_ELD
MediaRecorder.AudioEncoder.AMR_NB
MediaRecorder.AudioEncoder.AMR_WB
MediaRecorder.AudioEncoder.HE_AAC
MediaRecorder.AudioEncoder.VORBIS
MediaRecorder.AudioEncoder.DEFAULT
For output format options I have:
MediaRecorder.OutputFormat.MPEG_4
MediaRecorder.OutputFormat.AAC_ADTS
MediaRecorder.OutputFormat.AMR_NB
MediaRecorder.OutputFormat.AMR_WB
MediaRecorder.OutputFormat.THREE_GPP
MediaRecorder.OutputFormat.WEBM
MediaRecorder.OutputFormat.DEFAULT
I have not been able to find the right combination that works with Bluemix API. Please recommend some combinations to try. Is there one?
Thanks
|
stackoverflow
|
{
"language": "en",
"length": 130,
"provenance": "stackexchange_0000F.jsonl.gz:841200",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468280"
}
|
35c17d08ac1e27feb435636d3f1f59aea8c2770f
|
Stackoverflow Stackexchange
Q: How to POS_TAG a french sentence? I'm looking for a way to pos_tag a French sentence like the following code is used for English sentences:
def pos_tagging(sentence):
var = sentence
exampleArray = [var]
for item in exampleArray:
tokenized = nltk.word_tokenize(item)
tagged = nltk.pos_tag(tokenized)
return tagged
A: The NLTK doesn't come with pre-built resources for French. I recommend using the Stanford tagger, which comes with a trained French model. This code shows how you might set up the nltk for use with Stanford's French POS tagger. Note that the code is outdated (and for Python 2), but you could use it as a starting point.
Alternately, the NLTK makes it very easy to train your own POS tagger on a tagged corpus, and save it for later use. If you have access to a (sufficiently large) French corpus, you can follow the instructions in the nltk book and simply use your corpus in place of the Brown corpus. You're unlikely to match the performance of the Stanford tagger (unless you can train a tagger for your specific domain), but you won't have to install anything.
|
Q: How to POS_TAG a french sentence? I'm looking for a way to pos_tag a French sentence like the following code is used for English sentences:
def pos_tagging(sentence):
var = sentence
exampleArray = [var]
for item in exampleArray:
tokenized = nltk.word_tokenize(item)
tagged = nltk.pos_tag(tokenized)
return tagged
A: The NLTK doesn't come with pre-built resources for French. I recommend using the Stanford tagger, which comes with a trained French model. This code shows how you might set up the nltk for use with Stanford's French POS tagger. Note that the code is outdated (and for Python 2), but you could use it as a starting point.
Alternately, the NLTK makes it very easy to train your own POS tagger on a tagged corpus, and save it for later use. If you have access to a (sufficiently large) French corpus, you can follow the instructions in the nltk book and simply use your corpus in place of the Brown corpus. You're unlikely to match the performance of the Stanford tagger (unless you can train a tagger for your specific domain), but you won't have to install anything.
A: Here is the full source code; it works very well.
Download link for Stanford NLP: https://nlp.stanford.edu/software/tagger.shtml#About
from nltk.tag import StanfordPOSTagger
jar = 'C:/Users/m.ferhat/Desktop/stanford-postagger-full-2016-10-31/stanford-postagger-3.7.0.jar'
model = 'C:/Users/m.ferhat/Desktop/stanford-postagger-full-2016-10-31/models/french.tagger'
import os
java_path = "C:/Program Files/Java/jdk1.8.0_121/bin/java.exe"
os.environ['JAVAHOME'] = java_path
pos_tagger = StanfordPOSTagger(model, jar, encoding='utf8')
res = pos_tagger.tag('je suis libre'.split())
print (res)
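To make the train-your-own-tagger suggestion concrete without any external dependencies, here is a toy unigram tagger in plain Python. The tiny hand-tagged corpus and the fallback tag are made up for the example; a real tagger would be trained on a large tagged corpus such as the French Treebank.

```python
from collections import Counter, defaultdict

def train_unigram_tagger(tagged_sentences):
    """Map each word to its most frequent tag in the training data."""
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag_ in sentence:
            counts[word.lower()][tag_] += 1
    return {word: tags.most_common(1)[0][0] for word, tags in counts.items()}

def tag(model, sentence, default="NOUN"):
    """Tag each token; unknown words fall back to the default tag."""
    return [(w, model.get(w.lower(), default)) for w in sentence.split()]

# Tiny hand-tagged corpus (illustrative only)
corpus = [
    [("je", "PRON"), ("suis", "VERB"), ("libre", "ADJ")],
    [("je", "PRON"), ("mange", "VERB")],
]
model = train_unigram_tagger(corpus)
print(tag(model, "je suis libre"))
```

The nltk's UnigramTagger follows the same most-frequent-tag idea, with backoff taggers layered on top for unknown words.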
|
stackoverflow
|
{
"language": "en",
"length": 234,
"provenance": "stackexchange_0000F.jsonl.gz:841209",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468300"
}
|
fbb358b5e9bf1c3aac41732b5b7cf8ac995c144a
|
Stackoverflow Stackexchange
Q: Access to WrappedArray elements I have a spark dataframe and here is the schema:
|-- eid: long (nullable = true)
|-- age: long (nullable = true)
|-- sex: long (nullable = true)
|-- father: array (nullable = true)
| |-- element: array (containsNull = true)
| | |-- element: long (containsNull = true)
and a sample of rows:
df.select(df['father']).show()
+--------------------+
| father|
+--------------------+
|[WrappedArray(-17...|
|[WrappedArray(-11...|
|[WrappedArray(13,...|
+--------------------+
and the type is
DataFrame[father: array<array<bigint>>]
How can I have access to each element of inner array? For example -17 in the first row?
I tried different things like df.select(df['father'])(0)(0).show() but no luck.
A: If I'm not mistaken, the syntax in Python is
df.select(df['father'])[0][0].show()
or
df.select(df['father']).getItem(0).getItem(0).show()
See some examples here: http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=column#pyspark.sql.Column
|
Q: Access to WrappedArray elements I have a spark dataframe and here is the schema:
|-- eid: long (nullable = true)
|-- age: long (nullable = true)
|-- sex: long (nullable = true)
|-- father: array (nullable = true)
| |-- element: array (containsNull = true)
| | |-- element: long (containsNull = true)
and a sample of rows:
df.select(df['father']).show()
+--------------------+
| father|
+--------------------+
|[WrappedArray(-17...|
|[WrappedArray(-11...|
|[WrappedArray(13,...|
+--------------------+
and the type is
DataFrame[father: array<array<bigint>>]
How can I have access to each element of inner array? For example -17 in the first row?
I tried different things like df.select(df['father'])(0)(0).show() but no luck.
A: If I'm not mistaken, the syntax in Python is
df.select(df['father'])[0][0].show()
or
df.select(df['father']).getItem(0).getItem(0).show()
See some examples here: http://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=column#pyspark.sql.Column
A: The solution in Scala is as follows:
import org.apache.spark.sql.functions._
val data = sparkContext.parallelize("""{"eid":1,"age":30,"sex":1,"father":[[1,2]]}""" :: Nil)
val dataframe = sqlContext.read.json(data).toDF()
the dataframe looks like this:
+---+---+---+--------------------+
|eid|age|sex|father |
+---+---+---+--------------------+
|1 |30 |1 |[WrappedArray(1, 2)]|
+---+---+---+--------------------+
the solution should be
dataframe.select(col("father")(0)(0) as("first"), col("father")(0)(1) as("second")).show(false)
output should be
+-----+------+
|first|second|
+-----+------+
|1 |2 |
+-----+------+
A: Another Scala answer would look like this:
df.select(col("father").getItem(0) as "father_0", col("father").getItem(1) as "father_1")
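The key point is that father is an array of arrays, so two indexing steps are needed: one into the outer array and one into the inner one. The same logic in plain Python lists (a stand-in for the chained Column getItem calls above, since this snippet does not start a Spark session):

```python
# Each row's `father` column is an array of arrays, e.g. [[-17, 2]]
rows = [
    {"father": [[-17, 2]]},
    {"father": [[-11, 5]]},
    {"father": [[13, 7]]},
]

# Equivalent of col("father").getItem(0).getItem(0): outer index, then inner
first_elements = [row["father"][0][0] for row in rows]
print(first_elements)  # [-17, -11, 13]
```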
|
stackoverflow
|
{
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:841214",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468311"
}
|
e388a3dc15a6d3d10edfc91bbf62a8030d2b459d
|
Stackoverflow Stackexchange
Q: Concurrent execution in Spark Streaming I have a Spark Streaming job that does some aggregations on an incoming Kafka stream and saves the results in Hive. However, I have about 5 Spark SQL queries to run on the incoming data. They can run concurrently, since there is no dependency among their transformations, and if possible I would like to run them concurrently rather than waiting for the first one to end. They all go to separate Hive tables. For example:
// This is the Kafka inbound stream
// Code in Consumer
val stream = KafkaUtils.createDirectStream[..](...)
val metric1= Future {
computeFuture(stream, dataframe1, countIndex)
}
val metric2= Future {
computeFuture(stream, dataframe2, countIndex)
}
val metric3= Future {
computeFirstFuture(stream, dataframe3, countIndex)
}
val metric4= Future {
computeFirstFuture(stream, dataframe4, countIndex)
}
metric1.onFailure {
case e => logger.error(s"Future failed with an .... exception", e)
}
metric2.onFailure {
case e => logger.error(s"Future failed with an .... exception", e)
}
....and so on
On doing the above, the actions inside the Futures appear to run sequentially (judging from the Spark web UI). How can I enforce concurrent execution? I am using Spark 2.0 and Scala 2.11.8. Do I need to create separate Spark sessions using .newSession()?
|
Q: Concurrent execution in Spark Streaming I have a Spark Streaming job that does some aggregations on an incoming Kafka stream and saves the results in Hive. However, I have about 5 Spark SQL queries to run on the incoming data. They can run concurrently, since there is no dependency among their transformations, and if possible I would like to run them concurrently rather than waiting for the first one to end. They all go to separate Hive tables. For example:
// This is the Kafka inbound stream
// Code in Consumer
val stream = KafkaUtils.createDirectStream[..](...)
val metric1= Future {
computeFuture(stream, dataframe1, countIndex)
}
val metric2= Future {
computeFuture(stream, dataframe2, countIndex)
}
val metric3= Future {
computeFirstFuture(stream, dataframe3, countIndex)
}
val metric4= Future {
computeFirstFuture(stream, dataframe4, countIndex)
}
metric1.onFailure {
case e => logger.error(s"Future failed with an .... exception", e)
}
metric2.onFailure {
case e => logger.error(s"Future failed with an .... exception", e)
}
....and so on
On doing the above, the actions inside the Futures appear to run sequentially (judging from the Spark web UI). How can I enforce concurrent execution? I am using Spark 2.0 and Scala 2.11.8. Do I need to create separate Spark sessions using .newSession()?
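A common culprit with this pattern is that the Futures all run on an execution context with too few threads, so the jobs queue up instead of overlapping. A minimal sketch of the idea in Python with concurrent.futures (the job bodies are placeholders, not Spark calls): independent jobs submitted to a pool only run concurrently when the pool has enough workers.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_metric(name):
    """Placeholder for one independent aggregation job."""
    time.sleep(0.2)  # stands in for the actual Spark action
    return name

start = time.monotonic()
# With max_workers >= number of jobs, the jobs overlap instead of queuing.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_metric, f"metric{i}") for i in range(1, 5)]
    results = [f.result() for f in as_completed(futures)]
elapsed = time.monotonic() - start

# Four 0.2 s jobs finish in roughly 0.2 s total rather than ~0.8 s.
print(sorted(results), round(elapsed, 2))
```

In Scala the analogous fix is to back the implicit ExecutionContext with a sufficiently large thread pool rather than the default.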
|
stackoverflow
|
{
"language": "en",
"length": 199,
"provenance": "stackexchange_0000F.jsonl.gz:841278",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468497"
}
|
4367d70f513fa8ace75ddf0845f39bbef5f0d1cc
|
Stackoverflow Stackexchange
Q: Image React-native not working I am trying to do this:
<Image
style={styles.image}
source={require(`./img/${params.image}.png`)}
/>
but returns me this error: "Unknown named module"
A: Try importing your image at top like this:
import React, { Component } from "react";
import bgimg from "./assets/bg.jpg";
Then you can use it like this:
<Image source={bgimg}>
If this doesn't work, please share the directory structure, showing the location of the file doing the require and the location of the image.
|
Q: Image React-native not working I am trying to do this:
<Image
style={styles.image}
source={require(`./img/${params.image}.png`)}
/>
but returns me this error: "Unknown named module"
A: Try importing your image at top like this:
import React, { Component } from "react";
import bgimg from "./assets/bg.jpg";
Then you can use it like this:
<Image source={bgimg}>
If this doesn't work, please share the directory structure, showing the location of the file doing the require and the location of the image.
A: I have also faced this issue. You cannot load an image like
source={require(`./img/${params.image}.png`)}
You have to store the require (with a static path) in a variable and then use that variable.
For example:
let imagePath = require("../../assets/list.png");
Also note that you cannot pass a variable into require.
check this ref. url: https://github.com/facebook/react-native/issues/2481
|
stackoverflow
|
{
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:841280",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468500"
}
|
a297494e5a738544b9d669e5e15132464b5ed4c7
|
Stackoverflow Stackexchange
Q: How to pass a variable from ModelSerializer.update() to ModelViewSet.update() in Django REST Framework I need to pass a return value of a custom model update method in the view response.
In my serializer I want to do:
class Serializer(ModelSerializer):
def update(self, instance, validated_data):
something_special = validated_data.pop('something_special')
important_info = model.update_something_special(something_special)
for attr, value in validated_data.items():
setattr(instance, attr, value)
instance.save()
return instance
And now in my view I'd like to return important_info in the response:
class View(ModelViewSet):
def update(self, request, *args, **kwargs):
partial = kwargs.pop('partial', False)
instance = self.get_object()
serializer = self.get_serializer(instance, data=request.data, partial=partial)
serializer.is_valid(raise_exception=True)
self.perform_update(serializer)
important_info = ???
return Response(serializer.data)
Is this possible in Django REST or is this a dead end? If so, how to do this differently?
A: class Serializer(ModelSerializer):
important_info = None
def update(self, instance, validated_data):
something_special = validated_data.pop('something_special')
self.important_info = model.update_something_special(something_special)
for attr, value in validated_data.items():
setattr(instance, attr, value)
instance.save()
return instance
class View(ModelViewSet):
def update(self, request, *args, **kwargs):
partial = kwargs.pop('partial', False)
instance = self.get_object()
serializer = self.get_serializer(instance, data=request.data, partial=partial)
serializer.is_valid(raise_exception=True)
self.perform_update(serializer)
important_info = serializer.important_info
return Response(serializer.data)
|
Q: How to pass a variable from ModelSerializer.update() to ModelViewSet.update() in Django REST Framework I need to pass a return value of a custom model update method in the view response.
In my serializer I want to do:
class Serializer(ModelSerializer):
def update(self, instance, validated_data):
something_special = validated_data.pop('something_special')
important_info = model.update_something_special(something_special)
for attr, value in validated_data.items():
setattr(instance, attr, value)
instance.save()
return instance
And now in my view I'd like to return important_info in the response:
class View(ModelViewSet):
def update(self, request, *args, **kwargs):
partial = kwargs.pop('partial', False)
instance = self.get_object()
serializer = self.get_serializer(instance, data=request.data, partial=partial)
serializer.is_valid(raise_exception=True)
self.perform_update(serializer)
important_info = ???
return Response(serializer.data)
Is this possible in Django REST or is this a dead end? If so, how to do this differently?
A: class Serializer(ModelSerializer):
important_info = None
def update(self, instance, validated_data):
something_special = validated_data.pop('something_special')
self.important_info = model.update_something_special(something_special)
for attr, value in validated_data.items():
setattr(instance, attr, value)
instance.save()
return instance
class View(ModelViewSet):
def update(self, request, *args, **kwargs):
partial = kwargs.pop('partial', False)
instance = self.get_object()
serializer = self.get_serializer(instance, data=request.data, partial=partial)
serializer.is_valid(raise_exception=True)
self.perform_update(serializer)
important_info = serializer.important_info
return Response(serializer.data)
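If the extra value is actually meant to reach the client, the view can merge it into the response payload instead of discarding it. A minimal sketch of that response-shaping step, using plain dicts as stand-ins for serializer.data (this snippet does not spin up Django, and the field names are illustrative):

```python
def build_response_payload(serializer_data, important_info):
    """Merge a serializer's data with extra info computed during update()."""
    return {**serializer_data, "important_info": important_info}

payload = build_response_payload({"id": 1, "name": "example"}, {"updated": True})
print(payload)
```

In the view this would be `return Response(build_response_payload(serializer.data, serializer.important_info))`.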
|
stackoverflow
|
{
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:841309",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468579"
}
|
fed9ef15fae72e414a5445406e72108052008825
|
Stackoverflow Stackexchange
Q: API request initiated by polyfills.bundle.js Whenever I make an API request from my Angular 2 app, I see two requests made in the browser console. The first one, I believe, is an internal XHR request initiated by polyfills.bundle.js, and the second one is the actual API call that returns the response. I also note that these are not asynchronous calls. The first call initiated by polyfills.bundle.js is costing application performance. What is the purpose of this request? Is there a way I can skip the call initiated by polyfills.ts?
A: That might be a CORS preflight request, which happens if the request is cross-domain.
|
Q: API request initiated by polyfills.bundle.js Whenever I make an API request from my Angular 2 app, I see two requests made in the browser console. The first one, I believe, is an internal XHR request initiated by polyfills.bundle.js, and the second one is the actual API call that returns the response. I also note that these are not asynchronous calls. The first call initiated by polyfills.bundle.js is costing application performance. What is the purpose of this request? Is there a way I can skip the call initiated by polyfills.ts?
A: That might be a CORS preflight request, which happens if the request is cross-domain.
|
stackoverflow
|
{
"language": "en",
"length": 106,
"provenance": "stackexchange_0000F.jsonl.gz:841335",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468659"
}
|
d4db767239f9180c871717209148463f1d15f9a7
|
Stackoverflow Stackexchange
Q: Remove <br/> from XmlNode My XML could look like this:
<div>
<p>
First Text
<br/>
Second Text
</p>
</div>
Loading the xml-file, going through all nodes with the following code:
XmlDocument doc = new XmlDocument();
doc.Load(filepath);
foreach (XmlNode row in doc.SelectNodes("/div/p"))
{
string subtext = row.InnerText;
richtextbox.AppendText(subtext + "\n");
}
The result will always look like this:
First TextSecond Text
Now the problem obviously is, that there's no space (or even a line break) between the first & second text. So, is there a way to replace that <br/> with a space/line break?
A: You can use the following XPath:
doc.SelectNodes("/div/p/text()")
It gives you the two text nodes before and after the br tag.
|
Q: Remove <br/> from XmlNode My XML could look like this:
<div>
<p>
First Text
<br/>
Second Text
</p>
</div>
Loading the xml-file, going through all nodes with the following code:
XmlDocument doc = new XmlDocument();
doc.Load(filepath);
foreach (XmlNode row in doc.SelectNodes("/div/p"))
{
string subtext = row.InnerText;
richtextbox.AppendText(subtext + "\n");
}
The result will always look like this:
First TextSecond Text
Now the problem obviously is, that there's no space (or even a line break) between the first & second text. So, is there a way to replace that <br/> with a space/line break?
A: You can use the following XPath:
doc.SelectNodes("/div/p/text()")
It gives you the two text nodes before and after the br tag.
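The underlying issue is mixed content: the text before `<br/>` and the text after it are separate text nodes. Python's stdlib ElementTree makes the same split visible (element text vs. child tail), which is a language-neutral way to see why selecting the text nodes individually works:

```python
import xml.etree.ElementTree as ET

xml = "<div><p>First Text<br/>Second Text</p></div>"
p = ET.fromstring(xml).find("p")

# p.text is the text before <br/>; the <br/> element's tail is the text after it.
parts = [p.text] + [child.tail for child in p]
print("\n".join(parts))
```

Joining the parts with a newline (or a space) gives the separated output the question asks for.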
|
stackoverflow
|
{
"language": "en",
"length": 112,
"provenance": "stackexchange_0000F.jsonl.gz:841345",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468688"
}
|
0bf5f9754194848c1cb1ba9f8a9f774f7e44b8f2
|
Stackoverflow Stackexchange
Q: Account kit: Set the default language for iOS app How to change the language in Facebook-Account-Kit for iOS?
A: Facebook said that: Localization support is also provided by the SDK. The supported languages are packaged with the SDK. You don't need anything else to display text in the appropriate locale.
But it's not enough, you must add your language in Project Info like image below
|
Q: Account kit: Set the default language for iOS app How to change the language in Facebook-Account-Kit for iOS?
A: Facebook said that: Localization support is also provided by the SDK. The supported languages are packaged with the SDK. You don't need anything else to display text in the appropriate locale.
But it's not enough, you must add your language in Project Info like image below
A: I just changed the simulator/device language. The display language in the Account Kit UI changed automatically. It worked for me.
I also didn't need to add the localization config from "Thành Ngô Văn"'s answer.
A: I had changed the device language. Now it is working fine.
|
stackoverflow
|
{
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:841347",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468690"
}
|
c8bb7aa90582179fbf275372a9935997955ef967
|
Stackoverflow Stackexchange
Q: weblogic.management.ManagementException: There is the same running task. I am trying to deploy an EAR file to a WebLogic 12c application server. I was able to deploy it successfully, but the last time it got stuck on the connection to the database.
After that it keeps giving me this exception and the ear does not deploy any more.
The deployment has not been installed.
weblogic.management.ManagementException: There is the same running task. New Task: (deploy for my-package), Running Task: (deploy for my-package)
This happens even after I have restarted my server and did a clean deploy.
Please help.
Thanks
gmk
A: This is because weblogic adminserver was holding a lock on the deploy task. Restart the AdminServer and the error goes away.
|
Q: weblogic.management.ManagementException: There is the same running task. I am trying to deploy an EAR file to a WebLogic 12c application server. I was able to deploy it successfully, but the last time it got stuck on the connection to the database.
After that it keeps giving me this exception and the ear does not deploy any more.
The deployment has not been installed.
weblogic.management.ManagementException: There is the same running task. New Task: (deploy for my-package), Running Task: (deploy for my-package)
This happens even after I have restarted my server and did a clean deploy.
Please help.
Thanks
gmk
A: This is because weblogic adminserver was holding a lock on the deploy task. Restart the AdminServer and the error goes away.
A: In case you don't want to restart, you can delete a failed deployment from the Weblogic console, activate all the changes, and retry the deployment.
|
stackoverflow
|
{
"language": "en",
"length": 144,
"provenance": "stackexchange_0000F.jsonl.gz:841350",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468698"
}
|
e083e40c0ebe3e8c1e6a62807d52570ff2a04558
|
Stackoverflow Stackexchange
Q: Android Studio GPU Monitoring Not Working I'm using v2.3.1 of Android Studio. I have a Nexus 6P device running Android 7.1.2, and I've gone to Settings -> Developer Options -> Profile GPU Rendering and set it to adb shell dumpsys gfxinfo. Pausing and unpausing GPU in the Monitors tab does nothing but show a timeline with no data. I have selected the correct device and package in the Monitor dropdown. I've tried killing and restarting adb. I've tried restarting Android Studio and my phone. I'm at a loss as to why it isn't working.
|
Q: Android Studio GPU Monitoring Not Working I'm using v2.3.1 of Android Studio. I have a Nexus 6P device running Android 7.1.2, and I've gone to Settings -> Developer Options -> Profile GPU Rendering and set it to adb shell dumpsys gfxinfo. Pausing and unpausing GPU in the Monitors tab does nothing but show a timeline with no data. I have selected the correct device and package in the Monitor dropdown. I've tried killing and restarting adb. I've tried restarting Android Studio and my phone. I'm at a loss as to why it isn't working.
|
stackoverflow
|
{
"language": "en",
"length": 94,
"provenance": "stackexchange_0000F.jsonl.gz:841358",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468710"
}
|
460d586f1164b4b9667faf8c0d626bbd02827e9d
|
Stackoverflow Stackexchange
Q: sbatch --time Not Setting Time Limit Correctly I'm working on a large research cluster that has a default 30-minute timeout for any batch jobs submitted. I'm submitting my job with the line:
sbatch -p batch --job-name=$JOBNAME --time=1-00:00:00 --mail-type=ALL --mail-user=[my email] --wrap="math -run \"<<scriptName.wl\""
but it still times out after 30 minutes? I'm just trying to set a 1-day time limit (which is much more than it should need), and I have no idea why this wouldn't be working.
|
Q: sbatch --time Not Setting Time Limit Correctly I'm working on a large research cluster that has a default 30-minute timeout for any batch jobs submitted. I'm submitting my job with the line:
sbatch -p batch --job-name=$JOBNAME --time=1-00:00:00 --mail-type=ALL --mail-user=[my email] --wrap="math -run \"<<scriptName.wl\""
but it still times out after 30 minutes? I'm just trying to set a 1-day time limit (which is much more than it should need), and I have no idea why this wouldn't be working.
|
stackoverflow
|
{
"language": "en",
"length": 80,
"provenance": "stackexchange_0000F.jsonl.gz:841364",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468721"
}
|
592d7abb812119b16d9dd6748d5e9e1e609d583b
|
Stackoverflow Stackexchange
Q: Angular 2 Directive to set input field to uppercase using ngModelChange Please help. I'm having trouble creating a directive that will always set text inputs to uppercase. It seems to be working when looking at the user interface, but the model binding shows the last typed character still in lowercase.
below is a portion of my html:
<div>
<md-input-container fxFlex>
<textarea #listCode mdInput [(ngModel)]="listInfo.code" placeholder="List Code"
uppercase-code maxlength="50" rows="3"
required></textarea>
<md-hint align="end">{{listCode.value.length}} / 50</md-hint>
</md-input-container>
{{listInfo.code}}
</div>
below is the directive:
import { Directive } from '@angular/core';
import { NgControl } from '@angular/forms';
@Directive({
selector: '[ngModel][uppercase-code]',
host: {
'(ngModelChange)': 'ngOnChanges($event)'
}
})
export class UppercaseCodeDirective {
constructor(public model: NgControl) {}
ngOnChanges(event) {
var newVal = event.replace(/[^A-Za-z0-9_]*/g, '');
newVal = newVal.toUpperCase();
this.model.valueAccessor.writeValue(newVal);
}
}
A: You should be using a directive as below,
@HostListener('keyup') onKeyUp() {
this.el.nativeElement.value = this.el.nativeElement.value.toUpperCase();
}
LIVE DEMO
|
Q: Angular 2 Directive to set input field to uppercase using ngModelChange Please help. I'm having trouble creating a directive that will always set text inputs to uppercase. It seems to be working when looking at the user interface, but the model binding shows the last typed character still in lowercase.
below is a portion of my html:
<div>
<md-input-container fxFlex>
<textarea #listCode mdInput [(ngModel)]="listInfo.code" placeholder="List Code"
uppercase-code maxlength="50" rows="3"
required></textarea>
<md-hint align="end">{{listCode.value.length}} / 50</md-hint>
</md-input-container>
{{listInfo.code}}
</div>
below is the directive:
import { Directive } from '@angular/core';
import { NgControl } from '@angular/forms';
@Directive({
selector: '[ngModel][uppercase-code]',
host: {
'(ngModelChange)': 'ngOnChanges($event)'
}
})
export class UppercaseCodeDirective {
constructor(public model: NgControl) {}
ngOnChanges(event) {
var newVal = event.replace(/[^A-Za-z0-9_]*/g, '');
newVal = newVal.toUpperCase();
this.model.valueAccessor.writeValue(newVal);
}
}
A: You should be using a directive as below,
@HostListener('keyup') onKeyUp() {
this.el.nativeElement.value = this.el.nativeElement.value.toUpperCase();
}
LIVE DEMO
A: This question has somehow already been answered on SO, here, although solutions have piled up along with newer framework versions.
At least in my experience, there were two useful answers, though neither worked on its own: from Thierry Templier (together with its first comment), and from cal.
I put together parts of both, and came up with this version, which is now working with Angular 4.1.1 in a reactive form:
import { Directive, Renderer, ElementRef, forwardRef } from '@angular/core';
import { NG_VALUE_ACCESSOR, DefaultValueAccessor } from '@angular/forms';
const LOWERCASE_INPUT_CONTROL_VALUE_ACCESSOR = {
provide: NG_VALUE_ACCESSOR,
useExisting: forwardRef(() => LowerCaseInputDirective),
multi: true,
};
@Directive({
selector: 'input[lowercase]',
host: {
// When the user updates the input
'(input)': 'onInput($event.target.value)',
'(blur)': 'onTouched()',
},
providers: [
LOWERCASE_INPUT_CONTROL_VALUE_ACCESSOR,
],
})
export class LowerCaseInputDirective extends DefaultValueAccessor {
constructor(renderer: Renderer, elementRef: ElementRef) {
super(renderer, elementRef, false);
}
writeValue(value: any): void {
const transformed = this.transformValue(value);
super.writeValue(transformed);
}
onInput(value: any): void {
const transformed = this.transformValue(value);
super.writeValue(transformed);
this.onChange(transformed);
}
private transformValue(value: any): any {
const result = value && typeof value === 'string'
? value.toLowerCase()
: value;
return result;
}
}
This is for lower-case, but everything holds for upper-case as well: just rename the directive and adjust the selector and transformValue accordingly.
|
stackoverflow
|
{
"language": "en",
"length": 344,
"provenance": "stackexchange_0000F.jsonl.gz:841365",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468727"
}
|
a525b1014097af34d4f2b04b3665e2358a0adf17
|
Stackoverflow Stackexchange
Q: Import timestamp in proto file of protobuf for GRPC I am trying to use gRPC with protobuf as the message exchange format.
I wanted to add a timestamp field inside my message, but I don't know the right way to add the import statement in the proto file. I am using Golang, so I need the output as .pb.go
A: Make sure to import in your proto file:
import "google/protobuf/timestamp.proto";
And use the type for your variable like:
google.protobuf.Timestamp time_name = 1;
|
Q: Import timestamp in proto file of protobuf for GRPC I am trying to use gRPC with protobuf as the message exchange format.
I wanted to add a timestamp field inside my message, but I don't know the right way to add the import statement in the proto file. I am using Golang, so I need the output as .pb.go
A: Make sure to import in your proto file:
import "google/protobuf/timestamp.proto";
And use the type for your variable like:
google.protobuf.Timestamp time_name = 1;
A: In your proto file:
import "google/protobuf/timestamp.proto"
Based on the documentation, that should be all that's necessary.
A: gRPC does not have a timestamp type AFAIK.
I usually use the Unix epoch - the Go functions
Unix(sec int64, nsec int64)
and
func (t Time) Unix() int64
are your friends.
A: You can import timestamp from the ptypes package in the standard Go Protobuf repo:
import (
"github.com/golang/protobuf/ptypes/timestamp"
)
|
stackoverflow
|
{
"language": "en",
"length": 148,
"provenance": "stackexchange_0000F.jsonl.gz:841369",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468734"
}
|
9f1a839f6e071547c9c955b5cae3c601fe1e5d9f
|
Stackoverflow Stackexchange
Q: Is there any way to create new scope in php without creating a new function? In java we can create a new scope like this:
{
//create new scope
{
// create another new scope
}
}
But can I do this in PHP too to create new nested scope?
A: PHP creates scope for a whole function (or required file, as mentioned by others in comments). You can simulate nested scope by creating a nested anonymous function like this:
function scope_outer() {
$outer_a = 'oa';
$outer_b = 'ob';
$inner_a = 'xxx'; // just to check if the nested function overwrites this
// define anonymous function to simulate nested scope and run it right away
(function() use($outer_b) {
$inner_a = 'ia';
var_dump($outer_a); // error, variable not visible in this scope
var_dump($outer_b); // "ob"
var_dump($inner_a); // "ia"
})();
var_dump($inner_a); // "xxx" => nested "scope" did *not* overwrite this variable
}
scope_outer();
The fiddle to play with: http://sandbox.onlinephpfunctions.com/code/7b4449fe47cc48aefa61294883400a42659de4c6
|
Q: Is there any way to create new scope in php without creating a new function? In java we can create a new scope like this:
{
//create new scope
{
// create another new scope
}
}
But can I do this in PHP too to create new nested scope?
A: PHP creates scope for a whole function (or required file, as mentioned by others in comments). You can simulate nested scope by creating a nested anonymous function like this:
function scope_outer() {
$outer_a = 'oa';
$outer_b = 'ob';
$inner_a = 'xxx'; // just to check if the nested function overwrites this
// define anonymous function to simulate nested scope and run it right away
(function() use($outer_b) {
$inner_a = 'ia';
var_dump($outer_a); // error, variable not visible in this scope
var_dump($outer_b); // "ob"
var_dump($inner_a); // "ia"
})();
var_dump($inner_a); // "xxx" => nested "scope" did *not* overwrite this variable
}
scope_outer();
The fiddle to play with: http://sandbox.onlinephpfunctions.com/code/7b4449fe47cc48aefa61294883400a42659de4c6
|
stackoverflow
|
{
"language": "en",
"length": 156,
"provenance": "stackexchange_0000F.jsonl.gz:841374",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468754"
}
|
9f88176b2872cac927ecb810e4301b3f121eecbf
|
Stackoverflow Stackexchange
Q: Database.Logger.Level Enum Values Not Accessible in Version 11.0.0 Update 30 June:
This problem is corrected in version 11.0.2.
Prior to Firebase version 11.0.0, the enum values of Database.Logger.Level were directly accessible. An example that compiles with 10.2.6 is:
FirebaseDatabase.getInstance().setLogLevel(Logger.Level.DEBUG);
That statement does not compile using version 11.0.0. A workaround is to use valueOf():
FirebaseDatabase.getInstance().setLogLevel(Logger.Level.valueOf("DEBUG"));
In 11.0.0, the decompiled .class file for Database.Logger is:
public interface Logger {
public static enum Level {
zzcbX,
zzcbY,
zzcbZ,
zzcca,
zzccb;
private Level() {
}
}
}
In 10.2.6, it's:
public interface Logger {
public static enum Level {
DEBUG,
INFO,
WARN,
ERROR,
NONE;
private Level() {
}
}
}
Is use of valueOf() the appropriate workaround until the enum values are accessible again?
A: firebaser here
This is a known bug in version 11.0.0 and 11.0.1 of the Android SDK. It should be fixed in version 11.0.2, which is due by early July.
|
stackoverflow
|
{
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:841391",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468817"
}
|
23975144e650eb14f2d8c3d7c84bb511d33e5cc9
|
Stackoverflow Stackexchange
Q: Setting an empty value for a text input I have the following JS :
document.getElementById('sketchpad-post').setAttribute('value','')
The HTML input is as follow:
<input type="text" id="sketchpad-post" autocomplete="off" value="" placeholder="Message"/>
If the second argument of the setAttribute function is an empty string, like in the example above, it doesn't work: it doesn't empty the text field (the text field has a previously set value).
Now if the second argument is a non-empty string, then it works: it sets my text field to the provided value.
I find this behavior particularly strange…
I tried to enforce autocomplete="off" (and even autocomplete="flu") using setAttribute, and also to do a removeAttribute('value'), but I still cannot manage to have this field blank when the user displays it.
As a workaround I can set the value to a kind of placeholder like '…' or some other character (a non-breaking space maybe?) but it's not very nice.
I see this behavior in both the latest Chrome (Chromium) and Firefox.
Any idea ?
A: The value attribute only sets the field's default value; once the element has a current value, set the value property instead:
document.getElementById('sketchpad-post').value = "";
|
stackoverflow
|
{
"language": "en",
"length": 169,
"provenance": "stackexchange_0000F.jsonl.gz:841392",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468824"
}
|
07fcc632e6ff247eaba535b8652ab7c5dbee3a87
|
Stackoverflow Stackexchange
Q: Travis CI, error on npm test using mocha I'm trying to use Travis CI to run my mocha + chai tests for the first time and I can't seem to figure out why this is happening.
When the Travis build runs:
mocha
sh: 1: mocha: not found
The command "npm test" exited with 1.
.travis.yml
language: node_js
node_js:
- "8"
package.json (not the whole thing)
"scripts": {
"test": "mocha"
},
"Dependencies": {
"mocha": "3.4.2",
"chai": "4.0.2"
},
I also tried the test being: "test": "./node_modules/.bin/mocha" but that didn't work either.
Thanks for your help!
EDIT:
I'm not the smartest.... had Dependencies instead of dependencies (left over from when it said devDependencies!)
A: The way I solved this is that I went to the Travis menu, then to the cache, and cleared it... when you clear the cache on Travis, it runs npm install again to install all the dependencies.
|
stackoverflow
|
{
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:841406",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468867"
}
|
e02a9957541fa00ac4a18100bb233af4aec7d5f6
|
Stackoverflow Stackexchange
Q: What's the right content-type for a Java jar file? I'm uploading some jars into s3 and want to set the right content-type headers for them.
I looked through what I thought was a comprehensive list, and was unable to find any mention of jar.
A: I have seen instances of these content-type headers for JAR files:
application/java-archive
application/x-java-archive
application/x-jar
A: Oh, wikipedia says it's application/java-archive though I don't see that in any rfc or standards document.
A: I found a list which might be very useful to some.
You can find a more complete (but still incomplete) list of MIME-Types here from the official Mozilla Developers Network.
For Java Archive (JAR) .jar files the correct type is indeed:
.jar | application/java-archive
|
stackoverflow
|
{
"language": "en",
"length": 122,
"provenance": "stackexchange_0000F.jsonl.gz:841421",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468917"
}
|
7f208e384e7c731636297f0d5c2040d63a79378f
|
Stackoverflow Stackexchange
Q: Xcode 9 Localization Not Working? I have had localization in my app for Danish for a while and when I updated Xcode 9 and ran it in the simulator, everything started to show up in Danish... I have no idea why?
I made sure all my settings were in English and set to the United States but everything appeared in Danish when I ran the app in the simulator. Does anyone know anything about this? Maybe just a beta bug? Thanks!!
A: In addition to what Matusalem suggested (setting the language to English or System Language), also make sure that your development region is English, not Danish, in your application's Info.plist.
If this continues please file it with the bug reporter, and also include the result of a print or NSLog of NSProcessInfo.processInfo.arguments added to your app.
A: Edit your scheme and make sure "Application Language" is set to "System Language" in the options for "Run".
|
stackoverflow
|
{
"language": "en",
"length": 157,
"provenance": "stackexchange_0000F.jsonl.gz:841439",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44468972"
}
|
f3a5a9f089fa43cfda52572a2be71f87ed60a64c
|
Stackoverflow Stackexchange
Q: Getpreferences not working in fragment Commands like findViewById, getSharedPreferences are not working inside a Fragment
I am using kotlin and my code is as follow
fun update (v:View){
Val sharedpref = getSharedPreferences("logindata",Context.MODE_PRIVATE)}
LOG
E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.techno.app, PID: 25691
java.lang.IllegalStateException: Could not find method update(View) in a parent or ancestor Context for android:onClick attribute defined on view class android.support.v7.widget.AppCompatButton
at android.support.v7.app.AppCompatViewInflater$DeclaredOnClickListener.resolveMethod(AppCompatViewInflater.java:327)
at android.support.v7.app.AppCompatViewInflater$DeclaredOnClickListener.onClick(AppCompatViewInflater.java:284)
at android.view.View.performClick(View.java:5721)
at android.widget.TextView.performClick(TextView.java:10936)
at android.view.View$PerformClick.run(View.java:22620)
at android.os.Handler.handleCallback(Handler.java:739)
A: You are calling a Context method in a Fragment, but a Fragment is not a Context, so change the line to something like this:
Val sharedpref = getActivity().getSharedPreferences("logindata",Context.MODE_PRIVATE)}
And use getView method in onCreateView for using findViewById, for example:
TextView tv = (TextView) getView().findViewById(R.id.mtTextview);
A: Though concept-wise the above answer (https://stackoverflow.com/a/44469679/3845798) is correct, it needs to be written in Kotlin. Like getActivity(), getView() is accessed as a property.
Also, it's val, not Val.
Here is simple example of how to use findViewById(), getSharedPreferences() inside the activity.
MainActivity Code -
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
setBaseFragment()
}
private fun setBaseFragment() {
val fragment = MainFragment.newInstance()
supportFragmentManager
.beginTransaction()
.replace(R.id.fragment_container, fragment)
.commit()
}
}
And this is my fragment Class
class MainFragment : Fragment() {
lateinit var show: Button
lateinit var save: Button
lateinit var text: TextView
var prefs: SharedPreferences? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
}
override fun onCreateView(inflater: LayoutInflater?, container: ViewGroup?,
savedInstanceState: Bundle?): View? {
return inflater!!.inflate(R.layout.fragment_main, container, false)
}
override fun onViewCreated(view: View?, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
show = view?.findViewById(R.id.showButton) as Button
save = view?.findViewById(R.id.saveButton) as Button
text = view?.findViewById(R.id.textResult) as TextView
prefs = activity.getSharedPreferences("FUN",MODE_PRIVATE)
save.setOnClickListener {
val editor = prefs!!.edit()
editor.putString("Saving", "This is saveValueFromSharedPref")
editor.apply()
}
show.setOnClickListener {
text.setText(prefs?.getString("Saving","NotSaved"))
}
}
companion object {
fun newInstance(): MainFragment {
val fragment = MainFragment()
return fragment
}
}
}
It's a simple example with a text view and two buttons: first you save, then you show the saved value.
Also, for your app crash you can check this solution.
|
stackoverflow
|
{
"language": "en",
"length": 329,
"provenance": "stackexchange_0000F.jsonl.gz:841454",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469009"
}
|
69a4016af56c1fd7568ca845643e8cd74598aaa1
|
Stackoverflow Stackexchange
Q: In Python, How do I check whether a file exists starting or ending with a substring? I know about os.path.isfile(fname), but now I need to search if a file exists that is named FILEnTEST.txt where n could be any positive integer (so it could be FILE1TEST.txt or FILE9876TEST.txt)
I guess a solution to this could involve substrings that the filename starts/ends with OR one that involves somehow calling os.path.isfile('FILE' + n + 'TEST.txt') and replacing n with any number, but I don't know how to approach either solution.
A: You can use a regular expression:
/FILE\d+TEST.txt/
Example: regexr.com.
Then you can use said regular expression and iterate through all of the files in a directory.
import re
import os
filename_re = 'FILE\d+TEST.txt'
for filename in os.listdir(directory):
if re.search(filename_re, filename):
# this file has the form FILEnTEST.txt
# do what you want with it now
A: You would need to write your own filtering system, by getting all the files in a directory and then matching them to a regex string and seeing if they fail the test or not:
import re
import os
pattern = re.compile("FILE\d+TEST.txt")
dir = "/test/"
for filepath in os.listdir(dir):
if pattern.match(filepath):
#do stuff with matching file
I'm not near a machine with Python installed on it to test the code, but it should be something along those lines.
A: You can also do it as such:
import os
import re
if len([file for file in os.listdir(directory) if re.search('regex', file)]):
# there's at least 1 such file
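Putting the answers above together, here is a minimal, self-contained sketch (the function name and the choice of fullmatch are illustrative, not from the answers); re.fullmatch avoids the pitfall of re.search also matching names like MYFILE1TEST.txt.bak:

```python
import os
import re

# FILE<n>TEST.txt, where <n> is one or more digits
PATTERN = re.compile(r"FILE\d+TEST\.txt")

def file_n_test_exists(directory):
    """Return True if any entry in `directory` is named FILEnTEST.txt."""
    return any(PATTERN.fullmatch(name) for name in os.listdir(directory))
```

For example, file_n_test_exists(".") checks the current directory.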
|
stackoverflow
|
{
"language": "en",
"length": 249,
"provenance": "stackexchange_0000F.jsonl.gz:841465",
"question_score": "14",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469048"
}
|
b83ddee352810d32cc64976e57e68cb1635bf40e
|
Stackoverflow Stackexchange
Q: Use Docker Compose offline by using local images and not pulling images I want to issue the docker-compose command to bring up all the dependent service containers that I have previously pulled down while inside the company network. I am outside the company network so when I try to start my environment the first thing it does is try to call out to the company network and then fails with:
ERROR: Error while pulling image: Get http://myartifactory.service.dev:5000/v1/repositories/my_service/images: dial tcp 127.0.53.53:5000: getsockopt: connection refused
How can I force docker-compose to use the local images and not try to pull down the latest?
A: You can force docker-compose to use local images by first running:
docker-compose pull --ignore-pull-failures
|
stackoverflow
|
{
"language": "en",
"length": 117,
"provenance": "stackexchange_0000F.jsonl.gz:841489",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469135"
}
|
034ea5eda484add92fce638f23bb45193ba493a7
|
Stackoverflow Stackexchange
Q: Angular: How to cache images? I am developing a web app where I need to cache images so that it does not take time to load if user closes and opens website again.
By opening website again, images should be loaded from cache instead of reloading again.
Is this possible to implement with Angular?
I searched a lot online but I did not find proper solution for that yet.
A:
You can use - HTML manifest Attribute
The manifest attribute specifies the location of the document's cache manifest.
HTML5 introduces application cache, which means that a web application is cached, and accessible without an internet connection.
Application cache gives an application three advantages:
*Offline browsing - users can use the application when they're offline
*Speed - cached resources load faster
*Reduced server load - the browser will only download updated/changed resources from the server
The manifest attribute should be included on every page of your web application that you want cached.
(Note: the HTML Application Cache has since been deprecated in favor of service workers.)
The manifest file is a simple text file that lists the resources the browser should cache for offline access.
Detail understanding read this blog: https://www.html5rocks.com/en/tutorials/appcache/beginner/
|
stackoverflow
|
{
"language": "en",
"length": 188,
"provenance": "stackexchange_0000F.jsonl.gz:841519",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469232"
}
|
6dfa2ea8f8cf2e9eacf379703d8fb47060dd0a5f
|
Stackoverflow Stackexchange
Q: Cannot create directory in hdfs NameNode is in safe mode I upgraded to the latest version of Cloudera. Now I am trying to create a directory in HDFS
hadoop fs -mkdir data
Am getting the following error
Cannot Create /user/cloudera/data Name Node is in SafeMode.
How can I do this?
A: In addition to Ramesh Maharjan answer, By default, cloudera machine(Cloudera Quick Start#5.12) doesn't allow to SET OFF safe mode, it's required to specify the -u options as shown below:
sudo -u hdfs hdfs dfsadmin -safemode leave
A: When you start hadoop, for some time limit hadoop stays in safemode. You can either wait (you can see the time limit being decreased on Namenode web UI) until the time limit or You can turn it off with
hadoop dfsadmin -safemode leave
The above command turns off the safemode of hadoop
A: For me, I was immediately using hive command to go into hive shell after starting hadoop using start-all.sh. I re-tried using hive command after waiting for 10-20 seconds.
A: Might need the full path to hdfs command
/usr/local/hadoop/bin/hdfs dfsadmin -safemode leave
|
stackoverflow
|
{
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:841520",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469234"
}
|
8bd3216373095065e217305e221ed8dbf346381c
|
Stackoverflow Stackexchange
Q: Which is the difference between Link and DelegateLink on Atata framework? I could not figure out from the documentation the difference between Link and LinkDelegate components.
https://atata-framework.github.io/components/#link
Could someone explain on which scenarios would you use each one?
A: The main difference is the usage syntax.
using _ = SamplePage;
public class SamplePage : Page<SamplePage>
{
public Link<_> Save1 { get; private set; }
public LinkDelegate<_> Save2 { get; private set; }
public Link<SamplePage2, _> Navigate1 { get; private set; }
public LinkDelegate<SamplePage2, _> Navigate2 { get; private set; }
}
For internal links, without navigation:
Go.To<SamplePage>().
// To click:
Save1.Click().
Save2(). // As it's a delegate, call it like a method. Provides shorter syntax.
// To verify:
Save1.Should.Exist().
Save2.Should().Exist(); // Should() is extension method.
For navigation links:
Go.To<SamplePage>().
Navigate1.ClickAndGo();
Go.To<SamplePage>().
Navigate2(); // Shorter syntax.
The same applies to Button and ButtonDelegate.
So, if you often need to call a link/button and don't verify its state, you can use the delegate option to keep the call syntax short.
|
stackoverflow
|
{
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:841521",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469238"
}
|
cf20a31c9dcd4c6265a6ad2c4f79fca28c34e034
|
Stackoverflow Stackexchange
Q: How to use dagger2 subcomponent? According to official documents:https://google.github.io/dagger/subcomponents.html ,I add a subcomponent in @Module, like this:
@Module(subcomponents = {MainActivityComponent.class})
public class ContextModule {
private Context mContext;
public ContextModule(Context context) {
mContext = context;
}
@Provides
public Context provideContext() {
return mContext;
}
}
And declare my component and subcomponent like this:
@Component(modules = ContextModule.class)
public interface AppComponent {
Context provideContext();
MainActivityComponent getMainActivityComponent();
}
@Subcomponent(modules = {HardwareModule.class, SoftwareModule.class})
public interface MainActivityComponent {
void injectMainActivity(MainActivity activity);
}
But the code can not be compiled successfully. The error is this:
Error:(11, 1) : com.kilnn.dagger2.example.MainActivityComponent doesn't have a @Subcomponent.Builder, which is required when used with @Module.subcomponents
I don't know how to write a @Subcomponent.Builder, and if I remove the subcomponent declaration in @Module, everything is OK. So I don't know the right way to use a subcomponent.
A: Actually, the error is quite descriptive, all you need to do is add the Builder to your Subcomponent like this:
MainActivityComponent.class
@Subcomponent.Builder
interface Builder {
MainActivityComponent build();
}
For your current implementation, and since you don't have special dependencies you don't really need the Subcomponent.
Note: For convention's sake I recommend you to rename your Subcomponent to MainActivitySubcomponent
|
stackoverflow
|
{
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:841530",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469261"
}
|
75c15d28cd38b6fcfb3fb4e31fbac45384020429
|
Stackoverflow Stackexchange
Q: Price column object to int in pandas I have a column called amount which holds values that look like this: $3,092.44. When I do dataframe.dtypes() it returns this column as an object. How can I convert this column to type int?
A: dataframe["amount"] = dataframe["amount"].str.replace('[\$\,\.]', '').astype(int)
Make Colour Odometer (KM) Doors Price
0 Toyota White 150043 4 $4,000.00
1 Honda Red 87899 4 $5,000.00
2 Toyota Blue 32549 3 $7,000.00
3 BMW Black 11179 5 $22,000.00
4 Nissan White 213095 4 $3,500.00
5 Toyota Green 99213 4 $4,500.00
6 Honda Blue 45698 4 $7,500.00
7 Honda Blue 54738 4 $7,000.00
8 Toyota White 60000 4 $6,250.00
9 Nissan White 31600 4 $9,700.00
car_sales["Price"].dtype
output-dtype('O')
car_sales["Price"]=car_sales["Price"].str.replace('[\$\,\.]', '').astype(int)
car_sales["Price"]
output:
0 400000
1 500000
2 700000
3 2200000
4 350000
5 450000
6 750000
7 700000
8 625000
9 970000
Name: Price, dtype: int32
A: Here is a simple way to do it:
cars["amount"] = cars["amount"].str.replace("$" , "").str.replace("," , "").astype("float").astype("int")
*
*First you remove the dollar sign
*Next you remove the comma
*Then you convert the column to float. If you try to convert the column straight to integer, you will get the following error: Can only use .str accessor with string values!
*Finally you convert the column to integer
A: You can use Series.replace or Series.str.replace with Series.astype:
dataframe = pd.DataFrame(data={'amount':['$3,092.44', '$3,092.44']})
print (dataframe)
amount
0 $3,092.44
1 $3,092.44
dataframe['amount'] = dataframe['amount'].replace('[\$\,\.]', '', regex=True).astype(int)
print (dataframe)
amount
0 309244
1 309244
dataframe['amount'] = dataframe['amount'].astype(int)
print (dataframe)
amount
0 309244
1 309244
A: in regex \D means non-digit... so we can use pd.Series.replace with regex=True
dataframe.amount.replace('\D', '', regex=True).astype(int)
0 309244
1 309244
Name: amount, dtype: int64
A: This is how you do it while also discarding the cents:
car_sales["Price"] = car_sales["Price"].str.replace('[\$\,]|\.\d*', '').astype(int)
A: Assuming your column name is amount, here is what you should do:
dataframe['amount'] = dataframe.amount.str.replace('\$|\.|\,', '').astype(int)
A: If you want to convert a price into string then you can use the below method:
car_sales["Price"] = car_sales["Price"].replace('[\$\,]', '').astype(str)
car_sales["Price"]
0 400000
1 500000
2 700000
3 2200000
4 350000
5 450000
6 750000
7 700000
8 625000
9 970000
Name: Price, dtype: object
A: You can set it to int by:
df['amount'] = df['amount'].astype(int)
If you want to tell pandas to read the column as int in the first place, use:
#assuming you're reading from a file (requires import numpy as np)
pd.read_csv(file_name, dtype={'amount':np.int32})
A: This will also work: dframe.amount.str.replace("$","").astype(int)
A: This should be simple: just replace the $, the commas (,), and the decimal part (the . and the digits after it) with nothing ('') and it will work.
your_column_name = your_column_name.str.replace('[\$\,]|\.\d*', '').astype(int)
A: I think using a lambda and stripping the $ is also a good solution
dollarizer = lambda x: float(x[1:].replace(',', ''))
dataframe.amount = dataframe.amount.apply(dollarizer)
A: To avoid extra zeros while converting object to int, you should convert the object (e.g. $3,092.44) to float using the following code:
Syntax:
your_dataframe["your_column_name"] = your_dataframe["your_column_name"].str.replace('[\$\,]', '').astype(float)
Example:
car_sales["Price"] = car_sales["Price"].replace('[\$\,]', '').astype(float)
Result:
4000.0
A: dataframe["amount"] = dataframe["amount"].str.replace('[$,.]|..$','',regex=True).astype(int)
in str.replace(...)
[$,.] means: match $, , or .
| means: or
..$ means: match the last 2 characters
so '[$,.]|..$' means: match $, , or ., or the last 2 characters
A: export_car_sales["Price"] = export_car_sales["Price"].replace('[\$\,\.]', '', regex=True).astype(int)
A: Try with this one:
car_sales["Price"] = car_sales["Price"].str.replace('[\$\,]|\.\d*', '').astype(int)
but you have to divide it by 100 to remove the additional zeros that are going to be created, so you will have to run this additional instruction:
car_sales["Price"]=car_sales["Price"].apply(lambda x: x/100)
A: In the above code we have to use float instead of integer so that the cent value remains as cents.
df['Price'] = df['Price'].str.replace('[\$\,]','').astype(float)
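Note that several of the regex answers above also strip the decimal point, which silently turns $3,092.44 into 309244 (cents, with extra zeros for whole-dollar prices). If you want the dollar value instead, a small sketch (the sample frame below is hypothetical) is to keep the decimal point and go through float:

```python
import pandas as pd

# Hypothetical sample mirroring the question's "$3,092.44" format.
df = pd.DataFrame({"amount": ["$3,092.44", "$4,000.00"]})

# Strip only the currency symbol and thousands separators, keep the
# decimal point, and parse as float so cents survive. regex=True must
# be passed explicitly in recent pandas versions.
df["amount"] = df["amount"].str.replace(r"[\$,]", "", regex=True).astype(float)

print(df["amount"].tolist())  # [3092.44, 4000.0]
```

From there, .astype(int) truncates to whole dollars if an integer column is really required.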
|
stackoverflow
|
{
"language": "en",
"length": 581,
"provenance": "stackexchange_0000F.jsonl.gz:841553",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469313"
}
|
116730399a7fef3de9308d15446f5618cbff6f9c
|
Stackoverflow Stackexchange
Q: Is Ion-list supposed to automatically add an arrow on the list item? The Ionic 2 documentation makes it seem like the arrow automatically comes with it. It isn't working that way for me however.
https://ionicframework.com/docs/components/#lists
<ion-list>
<ion-item>
<p>Terms Of Use</p>
</ion-item>
<ion-item>
<p>Privacy Policy</p>
</ion-item>
</ion-list>
A: The arrow you're talking about is the Detail arrow (docs). Just like you can see in the docs:
By default, <button> and <a> elements with the ion-item attribute will
display a right arrow icon on ios mode.
And
To hide the right arrow icon on either of these elements, add the
detail-none attribute to the item. To show the right arrow icon on an
element that doesn't display it naturally, add the detail-push
attribute to the item.
Regarding Android and Windows phone,
This feature is not enabled by default for md and wp modes, but it can
be enabled by setting the Sass variables $item-md-detail-push-show and
$item-wp-detail-push-show, respectively, to true. It can also be
disabled for ios by setting $item-ios-detail-push-show to false
So if you want to enable it for android and windows phone, you just need to add the following in your variables.scss file:
$item-md-detail-push-show: true;
$item-wp-detail-push-show: true;
|
stackoverflow
|
{
"language": "en",
"length": 195,
"provenance": "stackexchange_0000F.jsonl.gz:841573",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469386"
}
|
9bf20108efdc4ebdafe86c2e8ad49bf8f35ba9fa
|
Stackoverflow Stackexchange
Q: ES6: Filter data with case insensitive term This is how I filter some data by title value:
data.filter(x => x.title.includes(term))
So data like
Sample one
Sample Two
Bla two
will be 'reduced' to
Bla two
if I'm filtering by two.
But I need to get the filtered result
Sample Two
Bla two
A: You can use a case-insensitive regular expression:
// Note that this assumes that you are certain that `term` contains
// no characters that are treated as special characters by a RegExp.
data.filter(x => new RegExp(term, 'i').test(x.title));
A perhaps easier and safer approach is to convert the strings to lowercase and compare:
data.filter(x => x.title.toLowerCase().includes(term.toLowerCase()))
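The case-insensitive RegExp approach above is only safe when term contains no RegExp metacharacters such as ( or *. One way to harden it — sketched here with hypothetical data and helper names — is to escape the term before building the pattern:

```javascript
// Hypothetical data mirroring the question's sample.
const data = [
  { title: 'Sample one' },
  { title: 'Sample Two' },
  { title: 'Bla two' },
];

// Escape RegExp metacharacters so arbitrary search terms are safe.
const escapeRegExp = (s) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');

const filterByTitle = (items, term) => {
  const re = new RegExp(escapeRegExp(term), 'i');
  return items.filter((x) => re.test(x.title));
};

console.log(filterByTitle(data, 'two').map((x) => x.title));
// → [ 'Sample Two', 'Bla two' ]
```

A term like '(' then simply matches nothing instead of throwing a SyntaxError.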
|
stackoverflow
|
{
"language": "en",
"length": 108,
"provenance": "stackexchange_0000F.jsonl.gz:841614",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469548"
}
|
86b4e212bd28ee21df8e955c9b9ecc8131e5abd3
|
Stackoverflow Stackexchange
Q: How to draw a linear chord diagram Is there any software or library able to produce a chord diagram like this one (not in circular form)?
|
stackoverflow
|
{
"language": "en",
"length": 27,
"provenance": "stackexchange_0000F.jsonl.gz:841632",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469603"
}
|
57d02c1e0251ebc451dd1603adbbc8b54cf0af46
|
Stackoverflow Stackexchange
Q: QueryException SQLSTATE[HY000] [1045] Access denied for user 'homestead'@'localhost' (using password: YES) Why is the following error occurring?
QueryException SQLSTATE[HY000] [1045] Access denied for user
'homestead'@'localhost' (using password: YES)
My .env file is as follows:
APP_NAME=Laravel
APP_ENV=local
APP_KEY=base64:P7auDP3AGgfkYLPbu+2/m7RCLc42Sip/HuXLLQFZiYs=
APP_DEBUG=true
APP_LOG_LEVEL=debug
APP_URL=http://localhost
DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=student_management
DB_USERNAME=root
DB_PASSWORD=
A: You should clear the cache after changing info in the .env file.
Run the following commands:
php artisan cache:clear
php artisan config:clear
php artisan config:cache
A: Either you edited the wrong file or you have not saved the .env yet, because from your error message it looks like you are accessing the database as user homestead, not as root on student_management. Make sure you edit the correct .env, not .env.example. Cheers
A: Add blank quotes for the password, like this:
DB_PASSWORD=""
in case you have an empty password.
Hope this solution works.
A: Cleaning out the system caches... for example, for the PhpStorm IDE:
On the main menu, choose File | Invalidate Caches/Restart. The Invalidate Caches message appears, informing you that the caches will be invalidated and rebuilt on the next start. Use the buttons in the dialog to invalidate caches, restart PhpStorm, or both.
A: You can use these commands to solve the problem
php artisan cache:clear
php artisan config:clear
php artisan config:cache
These commands are used to clear the cache.
A: sudo mysql -u root -p
1- CREATE USER 'root'@'localhost' IDENTIFIED BY '1234';
2- GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
|
stackoverflow
|
{
"language": "en",
"length": 247,
"provenance": "stackexchange_0000F.jsonl.gz:841643",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469647"
}
|
5a1eaedf3a7a774a0cbbcb25b3288c251e2ddc8b
|
Stackoverflow Stackexchange
Q: Workarounds for generic variable in Swift So I have a typealias tuple
public typealias MyTuple<T> = (key: T, value: String)
In my ViewController, I want to declare an array of MyTuple with generic data type as I still don't know the type for key yet. However, from this it is impossible to have a generic-type variable in Swift. There are other workarounds as follows but I don't like either of them. Anyone has better ideas?
class ViewController: UIViewController {
var array1 = [MyTuple<T>]() // compile error of course
var array2 = [MyTuple<Any>]() // no point as I'd use `Any` for MyTuple
func getArray<T>(array: Array<MyTuple<T>>) -> Array<MyTuple<T>> {
return array // not a good approach
}
}
A: I think the usual way to solve this is to “push” the type decision higher up the dependency chain, to the view controller:
class ViewController<T>: UIViewController {
var array: [MyTuple<T>]
}
That makes sense, since you would probably think about the controller as a “foo controller”, where “foo” is the concrete value of T. (A “pet controller”, a “product controller,” etc.) But of course you can’t create an instance of the array until you know the concrete type.
A: You could do something similar using a protocol for the array declaration and base methods that are not dependent on the data type of the key:
protocol KeyValueArray
{
associatedtype KeyType
var array:[(key:KeyType,value:String)] { get set }
}
extension KeyValueArray
{
var array:[(key: KeyType, value:String)] { get {return []} set { } }
}
class ViewController:UIViewController,KeyValueArray
{
// assuming this is like an "abstract" base class
// that won't actually be instantiated.
typealias KeyType = Any
// you can implement base class functions using the array variable
// as long as they're not dependent on a specific key type.
}
class SpecificVC:ViewController
{
typealias KeyType = Int
var array:[(key:Int,value:String)] = []
}
I'm assuming that, at some point, the concrete instances of the view controller subclasses will have an actual type for the keys.
|
stackoverflow
|
{
"language": "en",
"length": 331,
"provenance": "stackexchange_0000F.jsonl.gz:841645",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469650"
}
|
161ed25a452611604cea76c7e1fd6c5f103cc40c
|
Stackoverflow Stackexchange
Q: Audio not working on device. Does work on simulator I have created a Sound class and placed a method in my scene file to call the audio when the view begins.
I have also linked the AudioToolbox.framework in the Build Phase.
All runs fine on my simulator but the sound does not play on an actual device.
static let sharedInstance = Sound()
private let music = "filename.mp3"
private var player = AVAudioPlayer()
private var initialized = false
func playMusic() {
let url = URL(fileURLWithPath: Bundle.main.path(forResource: "filename", ofType: nil)!)
do {
player = try AVAudioPlayer(contentsOf: url)
} catch let error as NSError {
NSLog("Error playing music: %@", error)
}
player.numberOfLoops = -1
player.prepareToPlay()
player.play()
initialized = true
}
|
stackoverflow
|
{
"language": "en",
"length": 118,
"provenance": "stackexchange_0000F.jsonl.gz:841649",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469660"
}
|
f5404e3ac579dbcf530132d8d8e245b604e7c017
|
Stackoverflow Stackexchange
Q: Laravel DB::beginTransaction causing other transaction locked In classic PHP I have 2 process files:
1) process1.php => To lock a row in an InnoDB table.
2) process2.php => To read another row from the same table
I have played with START TRANSACTION AND COMMIT in PHP/MySQL and it works exactly as I want.
What I want is that process2.php can select another record even though process1.php is still running and locking some rows.
Unfortunately it does not work in Laravel, where my other process was locked until the first process completed, even though the second process was selecting another table.
Please help me figure out this issue.
DB::beginTransaction();
$rs = DB::table('test_iap')->where('content', 'waiting')->lockForUpdate()->get();
$sql = "update test_iap set content='Updated' where id=1";
DB::connection('mysql_pencil_main')->update(DB::raw($sql));
sleep(10);# To allow testing at different file process
DB::commit();
A: use sharedLock() instead of lockForUpdate(). A shared lock prevents the selected rows from being modified until your transaction commits. See more description here
DB::beginTransaction();
$rs = DB::table('test_iap')->where('content', 'waiting')->sharedLock()->get();
$sql = "update test_iap set content='Updated' where id=1";
DB::connection('mysql_pencil_main')->update(DB::raw($sql));
sleep(10);# To allow testing at different file process
DB::commit();
A: The problem is from sleep. I tried the same script without any locks or transactions and the other APIs still wait until the 10 seconds end.
Please take a look at this question:
How to sleep PHP(Laravel 5.2) in background.
If you want to keep the sleep, you can run another server instance on another port, like:
php artisan serve --port 7000
and send the second request to that port.
|
stackoverflow
|
{
"language": "en",
"length": 248,
"provenance": "stackexchange_0000F.jsonl.gz:841680",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469780"
}
|
0734e2585b4f1a0819ce1ec80e7bbf2e48c39aea
|
Stackoverflow Stackexchange
Q: React native: DNS Cache issue I have migrated my server to a different host. The IP has changed and now some of the apps in production are not working (not everyone is affected). Especially affected are those using Android versions below 5 (Lollipop). Is this issue known or is it something else? I am using react-native v0.35.0
My question is somehow related to this issue
From what I understand, this is related to name resolution, and the app is contacting the old server IP. Is there a way to fix this other than rolling back to the old server IP?
|
stackoverflow
|
{
"language": "en",
"length": 100,
"provenance": "stackexchange_0000F.jsonl.gz:841701",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469851"
}
|
d5354deaff8dae71a1e2fa317856bec52b6b0194
|
Stackoverflow Stackexchange
Q: Is the Visual Studio Code Extension Generator broken? I'm following the instructions to build a "hello world" extension for Visual Studio Code as outlined here. I've got the Yeoman generator installed, but it seems buggy. For one thing, unless I immediately select one of the initial options when generating a new extension, I'm unable to select an option. Further, if I do immediately select an option (i.e. before enter stops working), I'm prompted to give a name for the extension. However, no matter how furiously I pound on my keyboard, no characters seem to be registered by the generator application.
Has anyone else experienced these issues with the generator? I'd really like to start experimenting with VS Code extensions, but if the generator doesn't work I'm not sure where to start.
A: This is an issue with yo in node 7.1.0. You have to upgrade or downgrade node.js.
|
stackoverflow
|
{
"language": "en",
"length": 149,
"provenance": "stackexchange_0000F.jsonl.gz:841716",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469918"
}
|
bda6ece6425abec163e48a2fdedc2fb0cb7e012d
|
Stackoverflow Stackexchange
Q: How to change woo-commerce product category url? I have set up WordPress WooCommerce website. Default product category URL is like
http://localhost/woocommerce/product-category/test/.
But I want to change this URL like http://localhost/woocommerce/test/product-category.
Is it possible to change the category URL format into which I want?
A: You can do this in Settings > Permalinks.
If you only need to customize the category archive URLs, the right parameter to change is Product Category Base.
If you want to customize the final product URL, you can change the parameter in Custom Base and insert your own slug, or a slug + the product_cat placeholder, like in this example: /test/%product_cat%/
Then save the changes, of course.
Cheers,
Francesco
A: Under WordPress Settings->Permalinks
https://example.com/wp-admin/options-permalink.php
|
stackoverflow
|
{
"language": "en",
"length": 115,
"provenance": "stackexchange_0000F.jsonl.gz:841734",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44469970"
}
|
d138f2516db5aa5cc9dfab70982307fd84655389
|
Stackoverflow Stackexchange
Q: Aligning an Image and text inline I am trying to simply put an image inline with some text so that the text and image appear beside each other. I'm using display:inline; but it doesn't seem to be working. Here is my code:
<div class="design-image" style="display:inline;">
<img src="https://s29.postimg.org/taqtdfe7r/image1.png">
</div>
<div class="programs" style="display:inline;">
<p>Taking the approach of truly designing programs from ground up, Northman Underwrites each individual to reflect the unique exposure of an extraordinary life. </p>
</div>
A: Alternatively use a flexbox:
img {
width: 300px;
height: auto;
}
p {
margin:0;
}
.fb {
display: flex;
}
.programs, .design-image {
padding: 1em;
}
<div class='fb'>
<div class="design-image">
<img src="https://s29.postimg.org/taqtdfe7r/image1.png">
</div>
<div class="programs">
<p>Taking the approach of truly designing programs from ground up, Northman Underwrites each individual to reflect the unique exposure of an extraordinary life. </p>
</div>
</div>
A: To get the desired effect, you should use the float property. It changes the way that elements are added to the browser window. Here is an example of what it can do for you:
div {
display: inline;
}
#pic {
float: left;
/* setting these so you know where the image would be */
width: 200px;
height: 200px;
background-color: red;
margin-right: 50px;
}
#blah {
float: left;
width: 100px;
}
<div id="pic">
Image would go here
</div>
<div id="blah">
<p>This is a short description referencing the image in question.</p>
</div>
A: Hi, first of all wrap the img tag in a div, give it a width, and float it right.
See the code:
<div>
<p> aking the approach of truly designing programs from ground up, Northman Underwrites each individual
to reflect the unique exposure of an extraordinary life.
<div style="width:300px;float:right; padding:10px;"><img src="insert your image path"></div></p>
</div>
A: Try This:
.design-image {
float: left;
width: 50%; /*/ Or other Value /*/
}
img {
width: 100%;
}
<div class="design-image">
<img src="http://www.mrwallpaper.com/wallpapers/cute-bunny-1600x900.jpg">
</div>
<div class="programs">
<p>Taking the approach of truly designing programs from ground up, Northman Underwrites each individual to reflect the unique exposure of an extraordinary life. </p>
</div>
A: What you want to do can be done using float, giving the divs a width, and setting styles on the image and paragraph tags.
The code below can help you achieve what you want:
<div class="design-image" style="width: 50%; float: left;">
<img style="width: 100%;" src="https://s29.postimg.org/taqtdfe7r/image1.png">
</div>
<div class="programs" style="width: 50%; float: left;">
<p style="padding: 0 20px; margin:0;">Taking the approach of truly designing programs from ground up, Northman Underwrites each individual to reflect the unique exposure of an extraordinary life. </p>
</div>
A: You can align those elements with different CSS attributes; I'll just show you some examples.
To achieve your objective you can use float, or display inline-block or table-cell (rarely used, but good to know). You can also use flexbox, but it is covered in another answer so I didn't add it here.
Remember that divs are block elements, so in most cases it's wiser to use inline-block than plain inline. Inline-block gives you the advantages of an inline element while keeping the ability to use vertical margin/padding (top, bottom).
jsFiddle here
<div class="method method-float">
<div class="design-image">
<img src="https://s29.postimg.org/taqtdfe7r/image1.png">
</div>
<div class="programs">
<p>Method float <br>Taking the approach of truly designing programs from ground up, Northman Underwrites each individual to reflect the unique exposure of an extraordinary life. </p>
</div>
</div>
<div class="method method-inline-block">
<div class="design-image">
<img src="https://s29.postimg.org/taqtdfe7r/image1.png">
</div>
<div class="programs">
<p>Method inline-block <br>Taking the approach of truly designing programs from ground up, Northman Underwrites each individual to reflect the unique exposure of an extraordinary life. </p>
</div>
</div>
<div class="method method-table-cell">
<div class="design-image">
<img src="https://s29.postimg.org/taqtdfe7r/image1.png">
</div>
<div class="programs">
<p>Method display table cell (not used, but interesting to know technique) <br>Taking the approach of truly designing programs from ground up, Northman Underwrites each individual to reflect the unique exposure of an extraordinary life. </p>
</div>
</div>
CSS
img {
width: 100%;
height: auto;
}
.method-float {
overflow: hidden;
}
.method-float .design-image {
float: left;
width: 50%;
}
.method-float .programs {
float: left;
width: 50%;
}
.method-inline-block {
font-size: 0;
}
.method-inline-block .design-image {
display: inline-block;
width: 50%;
vertical-align: top;
}
.method-inline-block .programs {
display: inline-block;
width: 50%;
vertical-align: top;
font-size: 16px;
}
.method-table-cell .design-image {
display: table-cell;
width: 1%;
vertical-align: top;
}
.method-table-cell .programs {
display: table-cell;
width: 1%;
vertical-align: top;
}
|
stackoverflow
|
{
"language": "en",
"length": 711,
"provenance": "stackexchange_0000F.jsonl.gz:841779",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470111"
}
|
7da594da535e181e892802b760ce3e54cf062ca3
|
Stackoverflow Stackexchange
Q: Keep values from SQL 'IN' Operator How to modify query
SELECT field1, field2 FROM table1 WHERE field1 IN ('value1', 'value2', 'value3');
to get NULL as field2 if 'value2' not found?
Table1:
field1 | field2
-------+--------
value1 | result1
value3 | result3
value4 | result4
Current output:
field1 | field2
-------+--------
value1 | result1
value3 | result3
Expected output:
field1 | field2
-------+-------
value1 | result1
value2 | NULL
value3 | result3
A: Use a LEFT JOIN against a table containing the 'full output set'. The IN clause (really, any condition in WHERE) can only filter/remove results: it can never add new records.
Depends on flavor of SQL; an example:
SELECT
fullSet.field1,
t.field2
FROM (SELECT 'value1' as field1 -- set of rows value1..4
UNION ALL
SELECT 'value2'
UNION ALL
SELECT 'value3'
UNION ALL
SELECT 'value4') fullSet
LEFT JOIN table1 t -- join to access t.field2 (null if no match)
ON t.field1 = fullSet.field1
WHERE t.field1 IN ('value1', 'value2', 'value3'); -- filtered value4
Different SQL dialects may provide more convenient methods of building up the entire result set space (e.g. CTEs in SQL Server).
|
Q: Keep values from SQL 'IN' Operator How to modify query
SELECT field1, field2 FROM table1 WHERE field1 IN ('value1', 'value2', 'value3');
to get NULL as field2 if 'value2' not found?
Table1:
field1 | field2
-------+--------
value1 | result1
value3 | result3
value4 | result4
Current output:
field1 | field2
-------+--------
value1 | result1
value3 | result3
Expected output:
field1 | field2
-------+-------
value1 | result1
value2 | NULL
value3 | result3
A: Use a LEFT JOIN against a table containing the 'full output set'. The IN clause (really, any condition in WHERE) can only filter/remove results: it can never add new records.
Depends on flavor of SQL; an example:
SELECT
fullSet.field1,
t.field2
FROM (SELECT 'value1' as field1 -- set of rows value1..4
UNION ALL
SELECT 'value2'
UNION ALL
SELECT 'value3'
UNION ALL
SELECT 'value4') fullSet
LEFT JOIN table1 t -- join to access t.field2 (null if no match)
ON t.field1 = fullSet.field1
WHERE t.field1 IN ('value1', 'value2', 'value3'); -- filtered value4
Different SQL dialects may provide more convenient methods of building up the entire result set space (e.g. CTEs in SQL Server).
A: If you are using Postgres, then I would write this as:
SELECT v.field1, t1.field2
FROM (VALUES ('value1'), ('value2'), ('value3')) v(field1) LEFT JOIN
table1 t1
ON t1.field1 = v.field1;
If you are using MySQL, the answer is similar, but the construct for the first table looks a bit different:
SELECT v.field1, t1.field2
FROM (SELECT 'value1' as field1 UNION ALL
SELECT 'value2' as field1 UNION ALL
SELECT 'value3' as field1
) v LEFT JOIN
table1 t1
ON t1.field1 = v.field1;
In neither case do you need a WHERE clause.
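The join-against-a-values-list pattern above can be sketched end-to-end with Python's built-in sqlite3 (a stand-in engine for illustration; SQLite names the VALUES columns column1, column2, ... by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (field1 TEXT, field2 TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?, ?)",
                 [("value1", "result1"), ("value3", "result3"), ("value4", "result4")])

# LEFT JOIN the wanted values against table1; rows with no match yield NULL (None).
rows = conn.execute("""
    SELECT v.column1 AS field1, t1.field2
    FROM (VALUES ('value1'), ('value2'), ('value3')) v
    LEFT JOIN table1 t1 ON t1.field1 = v.column1
    ORDER BY v.column1
""").fetchall()
# rows -> [('value1', 'result1'), ('value2', None), ('value3', 'result3')]
```

Note that 'value2' appears in the output with a NULL field2, which the plain IN query can never produce.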
|
stackoverflow
|
{
"language": "en",
"length": 271,
"provenance": "stackexchange_0000F.jsonl.gz:841783",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470130"
}
|
27fb049ad174e857cf8ee79ec347ca7362421d16
|
Stackoverflow Stackexchange
Q: Cloudinary python uploader not working import cloudinary
cloudinary.uploader.upload("my_picture.jpg")
Gives error
AttributeError: module 'cloudinary' has no attribute 'uploader'
A: I solved this by also adding this import statement:
import cloudinary.uploader
|
Q: Cloudinary python uploader not working import cloudinary
cloudinary.uploader.upload("my_picture.jpg")
Gives error
AttributeError: module 'cloudinary' has no attribute 'uploader'
A: I solved this by also adding this import statement:
import cloudinary.uploader
A: It usually happens when running it from a script called cloudinary.py, which shadows the real package. Try renaming cloudinary.py to something else. Also, delete cloudinary.pyc if it exists.
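The underlying Python behavior (importing a package does not automatically import its subpackages, which is why cloudinary.uploader needs its own import) can be demonstrated with a standard-library package:

```python
import xml  # imports only the package itself, not its subpackages

# A subpackage becomes reachable through the parent only once it is imported
# explicitly -- the same reason `cloudinary.uploader.upload(...)` fails until
# you add `import cloudinary.uploader`.
import xml.etree.ElementTree

tree_cls = xml.etree.ElementTree.Element  # now accessible via the parent package
```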
|
stackoverflow
|
{
"language": "en",
"length": 54,
"provenance": "stackexchange_0000F.jsonl.gz:841802",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470188"
}
|
7469c99e350bf1915726d00354ab2e995a009a91
|
Stackoverflow Stackexchange
Q: Why does Angular let you inject the $provide service into config blocks? As per Angular documentation, we can only inject Providers (not instances) in config blocks.
https://docs.angularjs.org/guide/module#module-loading-dependencies
But contrary to this Angular lets you inject $provide or $inject in spite of them being singleton service instances.
https://docs.angularjs.org/api/auto/service/$provide
A: This got me curious so I did some research. Here is what I found:
*
*$injector cannot be injected into config blocks
*$provide can be injected into config blocks
In code, the reason for 2 is that $provide is put into the providerCache before the providerInjector (the injector used in config blocks) is created. This ensures that it will always be a known provider to the providerInjector. https://github.com/angular/angular.js/blob/master/src/auto/injector.js#L671
That said, I do agree that being able to inject $provide into config blocks seems to contradict the general rule regarding what can be injected into configuration blocks stated here: https://docs.angularjs.org/guide/module#module-loading-dependencies
Even though it is clearly demonstrated to be something you can do here:
https://docs.angularjs.org/guide/module#configuration-blocks
$provide might just be the one exception to the general rule.
|
Q: Why does Angular let you inject the $provide service into config blocks? As per Angular documentation, we can only inject Providers (not instances) in config blocks.
https://docs.angularjs.org/guide/module#module-loading-dependencies
But contrary to this Angular lets you inject $provide or $inject in spite of them being singleton service instances.
https://docs.angularjs.org/api/auto/service/$provide
A: This got me curious so I did some research. Here is what I found:
*
*$injector cannot be injected into config blocks
*$provide can be injected into config blocks
In code, the reason for 2 is that $provide is put into the providerCache before the providerInjector (the injector used in config blocks) is created. This ensures that it will always be a known provider to the providerInjector. https://github.com/angular/angular.js/blob/master/src/auto/injector.js#L671
That said, I do agree that being able to inject $provide into config blocks seems to contradict the general rule regarding what can be injected into configuration blocks stated here: https://docs.angularjs.org/guide/module#module-loading-dependencies
Even though it is clearly demonstrated to be something you can do here:
https://docs.angularjs.org/guide/module#configuration-blocks
$provide might just be the one exception to the general rule.
|
stackoverflow
|
{
"language": "en",
"length": 172,
"provenance": "stackexchange_0000F.jsonl.gz:841828",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470294"
}
|
368851c68c86856afda1a8d4af035f31c3b72f8c
|
Stackoverflow Stackexchange
Q: Using Typescript with Realm JS I'm using Realm for react-native. They say if you want to use schemas on objects, you would do something like:
class Person {
[...]
}
Person.schema = PersonSchema;
// Note here we are passing in the `Person` constructor
let realm = new Realm({schema: [CarSchema, Person]});
I want to use Typescript in my projects. The type definition for schema is ObjectClass[] where ObjectClass is defined as:
interface ObjectClass {
schema: ObjectSchema;
}
interface ObjectSchema {
name: string;
primaryKey?: string;
properties: PropertiesTypes;
}
[ omitted the rest b/c that's not where the type fails ]
So, I defined my class:
class MyApp implements ObjectClass{
schema: { name: 'int', properties: { version: 'int' } }
}
But, the following fails:
let realm = new Realm({schema: [MyApp]})
Argument of type 'typeof MyApp' is not assignable to parameter of type 'ObjectClass'. Property 'schema' is missing in type 'typeof MyApp'.
A: The schema property on MyApp should be static (and this means you won't be able to implement the interface ObjectClass):
class MyApp {
static schema = { name: 'int', properties: { version: 'int' } };
}
|
Q: Using Typescript with Realm JS I'm using Realm for react-native. They say if you want to use schemas on objects, you would do something like:
class Person {
[...]
}
Person.schema = PersonSchema;
// Note here we are passing in the `Person` constructor
let realm = new Realm({schema: [CarSchema, Person]});
I want to use Typescript in my projects. The type definition for schema is ObjectClass[] where ObjectClass is defined as:
interface ObjectClass {
schema: ObjectSchema;
}
interface ObjectSchema {
name: string;
primaryKey?: string;
properties: PropertiesTypes;
}
[ omitted the rest b/c that's not where the type fails ]
So, I defined my class:
class MyApp implements ObjectClass{
schema: { name: 'int', properties: { version: 'int' } }
}
But, the following fails:
let realm = new Realm({schema: [MyApp]})
Argument of type 'typeof MyApp' is not assignable to parameter of type 'ObjectClass'. Property 'schema' is missing in type 'typeof MyApp'.
A: The schema property on MyApp should be static (and this means you won't be able to implement the interface ObjectClass):
class MyApp {
static schema = { name: 'int', properties: { version: 'int' } };
}
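The static-vs-instance distinction the answer hinges on can be illustrated in Python terms (a class attribute plays the role of TypeScript's static property; the class names here are just for illustration):

```python
class MyApp:
    # Class-level attribute: readable as MyApp.schema without creating an
    # instance, which is what a library inspecting the constructor needs.
    schema = {"name": "MyApp", "properties": {"version": "int"}}


class BrokenApp:
    def __init__(self):
        # Instance attribute: only exists after construction, so code that
        # inspects the class itself cannot see it.
        self.schema = {"name": "BrokenApp", "properties": {}}
```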
|
stackoverflow
|
{
"language": "en",
"length": 187,
"provenance": "stackexchange_0000F.jsonl.gz:841843",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470349"
}
|
95b996328b57b095bbb3b8eec8004fb39bc54630
|
Stackoverflow Stackexchange
Q: I got this message (CANNOT_GROUP_WITHOUT_AGG) from a simple query How do I fix this error message?
Unable to parse query string for Function QUERY parameter 2:
CANNOT_GROUP_WITHOUT_AGG
I got that error message from a simple query formula. I already tried searching about it and trying curly brackets { ... }, but it didn't fix it.
Can anyone help me or ever experienced it?
=QUERY(ANSWER!C:C, "SELECT * GROUP BY C", 0)
A: If you don't have an aggregation function (such as sum, avg, count) in SELECT, there is no use for GROUP BY - you may just delete it.
If you wish to present unique records, use distinct instead.
|
Q: I got this message (CANNOT_GROUP_WITHOUT_AGG) from a simple query How do I fix this error message?
Unable to parse query string for Function QUERY parameter 2:
CANNOT_GROUP_WITHOUT_AGG
I got that error message from a simple query formula. I already tried searching about it and trying curly brackets { ... }, but it didn't fix it.
Can anyone help me or ever experienced it?
=QUERY(ANSWER!C:C, "SELECT * GROUP BY C", 0)
A: If you don't have an aggregation function (such as sum, avg, count) in SELECT, there is no use for GROUP BY - you may just delete it.
If you wish to present unique records, use distinct instead.
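Google's QUERY language is not full SQL, but the rule is the same in standard SQL; a sketch with Python's sqlite3 (toy data, for illustration only) shows DISTINCT expressing the "unique records" intent that a bare GROUP BY was being misused for:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answers (c TEXT)")
conn.executemany("INSERT INTO answers VALUES (?)", [("a",), ("b",), ("a",)])

# With no aggregate (sum, avg, count, ...) in SELECT, grouping adds nothing;
# DISTINCT states the "unique records" intent directly.
unique = conn.execute("SELECT DISTINCT c FROM answers ORDER BY c").fetchall()
# unique -> [('a',), ('b',)]
```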
|
stackoverflow
|
{
"language": "en",
"length": 112,
"provenance": "stackexchange_0000F.jsonl.gz:841866",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470434"
}
|
fe7b3e7f3a84ae48216facc5f17a1ae8fc198501
|
Stackoverflow Stackexchange
Q: md-autocomplete with multiselect check box and two way binding I'm trying to implement an autocomplete dropdown with a multiselect checkbox option,
so when the user searches text, it should display the items with checkbox options, and the user can select multiple items.
Once the user selects the items, it should display the list of selected items below the dropdown with delete options.
The selected items are stored in an object array.
I did this with the below combination,
<md-autocomplete class="md-primary" md-no-cache="true"
md-selected-item="vm.selectedItem"
md-search-text="vm.searchText"
md-items="item in vm.find(vm.searchText)"
md-item-text="item.Name"
md-min-length="3"
placeholder="Type here to search...">
<md-item-template>
<md-checkbox ng-click="vm.ItemSelected(item, $event)" stop-event></md-checkbox>
<div>
<span class="item-title">
<span md-highlight-text="vm.searchText">{{item.Name}}</span>
</span>
</div>
</md-item-template>
<md-not-found>
No items were found.
</md-not-found>
</md-autocomplete>
Using a "stop-event" directive with event.stopPropagation() to prevent the dropdown from collapsing after a click.
But the issue is, I can't bind the search result items to the already selected items.
There is no option to mark an item as checked in the search list if it is already selected.
I've searched for fixes; all of them said to use md-chips, but I want to do it with md-autocomplete.
Can someone help me with this?
|
Q: md-autocomplete with multiselect check box and two way binding I'm trying to implement an autocomplete dropdown with a multiselect checkbox option,
so when the user searches text, it should display the items with checkbox options, and the user can select multiple items.
Once the user selects the items, it should display the list of selected items below the dropdown with delete options.
The selected items are stored in an object array.
I did this with the below combination,
<md-autocomplete class="md-primary" md-no-cache="true"
md-selected-item="vm.selectedItem"
md-search-text="vm.searchText"
md-items="item in vm.find(vm.searchText)"
md-item-text="item.Name"
md-min-length="3"
placeholder="Type here to search...">
<md-item-template>
<md-checkbox ng-click="vm.ItemSelected(item, $event)" stop-event></md-checkbox>
<div>
<span class="item-title">
<span md-highlight-text="vm.searchText">{{item.Name}}</span>
</span>
</div>
</md-item-template>
<md-not-found>
No items were found.
</md-not-found>
</md-autocomplete>
Using a "stop-event" directive with event.stopPropagation() to prevent the dropdown from collapsing after a click.
But the issue is, I can't bind the search result items to the already selected items.
There is no option to mark an item as checked in the search list if it is already selected.
I've searched for fixes; all of them said to use md-chips, but I want to do it with md-autocomplete.
Can someone help me with this?
|
stackoverflow
|
{
"language": "en",
"length": 189,
"provenance": "stackexchange_0000F.jsonl.gz:841889",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470536"
}
|
3f03c603c32097227829494aa15cf0b2a952cfc0
|
Stackoverflow Stackexchange
Q: Got "too many positional arguments" error on mongorestore I am trying to restore existing MongoDB database data. When I restore it from the command line, I get this error:
2017-06-10T12:27:55.474+0530 too many positional arguments
2017-06-10T12:27:55.476+0530 try 'mongorestore --help' for more information
I used this line
C:\Program Files\MongoDB\Server\3.4\bin> mongorestore F:\mongo_db\db
Can anyone please help me get rid of this error?
A: I found the solution after some time. We don't need to run the command from the MongoDB bin folder as in my question (C:\Program Files\MongoDB\Server\3.4\bin>).
Simply use this command. It restores the existing database, or creates the database if it doesn't exist:
mongorestore --host <database-host> -d <database-name> --port <database-port> foldername
Don't forget to start the MongoDB server before using this command. For your localhost:
database-host --> localhost
database-name --> Your db Name
database-port --> 27017
|
Q: Got "too many positional arguments" error on mongorestore I am trying to restore existing MongoDB database data. When I restore it from the command line, I get this error:
2017-06-10T12:27:55.474+0530 too many positional arguments
2017-06-10T12:27:55.476+0530 try 'mongorestore --help' for more information
I used this line
C:\Program Files\MongoDB\Server\3.4\bin> mongorestore F:\mongo_db\db
Can anyone please help me get rid of this error?
A: I found the solution after some time. We don't need to run the command from the MongoDB bin folder as in my question (C:\Program Files\MongoDB\Server\3.4\bin>).
Simply use this command. It restores the existing database, or creates the database if it doesn't exist:
mongorestore --host <database-host> -d <database-name> --port <database-port> foldername
Don't forget to start the MongoDB server before using this command. For your localhost:
database-host --> localhost
database-name --> Your db Name
database-port --> 27017
A: This error may occur when the folder name contains a space or hyphen (-). Before restoring the database, check that the folder containing the dump has no spaces or hyphens in its name.
Below is a used and tested cmd command line for a MongoDB database restore:
C:\Program Files\MongoDB\Server\4.0\bin>mongorestore -d foldername folderpath
Foldername - Name of the folder where your database dump exists.
Folderpath - The complete path on your system where the database dump exists.
|
stackoverflow
|
{
"language": "en",
"length": 214,
"provenance": "stackexchange_0000F.jsonl.gz:841916",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470621"
}
|
e6697e8e2c9bb95cb45cd8de082633b31a4e4285
|
Stackoverflow Stackexchange
Q: How can I send object and page id while routing without appending it in URL in Angular 2? I want to send object and page id from one component to another component. I don't want to append these data in URL.
My Approach :-
this.router.navigate(['./segment-details'], { queryParams: { pageId: this.pageId, data : JSON.stringify(cardData) } });
|
Q: How can I send object and page id while routing without appending it in URL in Angular 2? I want to send object and page id from one component to another component. I don't want to append these data in URL.
My Approach :-
this.router.navigate(['./segment-details'], { queryParams: { pageId: this.pageId, data : JSON.stringify(cardData) } });
|
stackoverflow
|
{
"language": "en",
"length": 56,
"provenance": "stackexchange_0000F.jsonl.gz:841918",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470634"
}
|
277215172749984d2a700dd156a9c8b208f74a7f
|
Stackoverflow Stackexchange
Q: Alternative to SecureString on a .net Standard library? I have a .net framework 4.7 library with extensions methods, one of them use a securestring, but I would like to convert this library to .net Standard 1.6. However, It is not possible to use the SecureString type.
So I would like to know if there are alternatives to SecureString in .net Standard.
Thanks.
A: SecureString is available for .NET Core via the System.Security.SecureString NuGet package. This NuGet package currently cannot be used in .NET Standard libraries as its implementation is specific to .NET Core. To use this package, you have to create a .NET Core library.
In .NET Standard 2.0, this type will be available without the need to add a NuGet package and you'll be able to use it in a .NET Standard library.
|
Q: Alternative to SecureString on a .net Standard library? I have a .net framework 4.7 library with extensions methods, one of them use a securestring, but I would like to convert this library to .net Standard 1.6. However, It is not possible to use the SecureString type.
So I would like to know if there are alternatives to SecureString in .net Standard.
Thanks.
A: SecureString is available for .NET Core via the System.Security.SecureString NuGet package. This NuGet package currently cannot be used in .NET Standard libraries as its implementation is specific to .NET Core. To use this package, you have to create a .NET Core library.
In .NET Standard 2.0, this type will be available without the need to add a NuGet package and you'll be able to use it in a .NET Standard library.
|
stackoverflow
|
{
"language": "en",
"length": 135,
"provenance": "stackexchange_0000F.jsonl.gz:841929",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470664"
}
|
5af498ff6ec80727e9320647ad4f0606ae79d755
|
Stackoverflow Stackexchange
Q: How to migrate existing table using Django and Python I need some help. I have an existing MySQL table in my localhost database and I need to migrate it using Django and Python. Here is my code:
settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'djangotest',
'USER': 'root',
'PASSWORD': '',
'HOST': '',
'PORT': '',
}
}
I am giving my table structure below.
Person:
id name phone age
models.py:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models
# Create your models here.
class Person(models.Model):
name = models.CharField(max_length=200)
phone = models.CharField(max_length=15)
age = models.IntegerField()
Actually I am new to Django and Python, and I need to know the command that can migrate the existing table.
A: To create migrations you need to use this command:
python manage.py makemigrations
The above command will create a file inside the migrations folder in your app directory, and
to create/update the table in the database using the migration file:
python manage.py migrate
The above command will create/update a table in your DB.
Django Migration Documentation
Let me know, if this is what you want!
|
Q: How to migrate existing table using Django and Python I need some help. I have an existing MySQL table in my localhost database and I need to migrate it using Django and Python. Here is my code:
settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'djangotest',
'USER': 'root',
'PASSWORD': '',
'HOST': '',
'PORT': '',
}
}
I am giving my table structure below.
Person:
id name phone age
models.py:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models
# Create your models here.
class Person(models.Model):
name = models.CharField(max_length=200)
phone = models.CharField(max_length=15)
age = models.IntegerField()
Actually I am new to Django and Python, and I need to know the command that can migrate the existing table.
A: To create migrations you need to use this command:
python manage.py makemigrations
The above command will create a file inside the migrations folder in your app directory, and
to create/update the table in the database using the migration file:
python manage.py migrate
The above command will create/update a table in your DB.
Django Migration Documentation
Let me know, if this is what you want!
A: In reference to akhilsp's answer, I did not have to worry about table names using the appname_modelname format. The inspectdb command returned Meta data with db_table set to the current table name I used in my model.
class Meta:
managed = False
db_table = 'malware'
A: You can use inspectdb command from the shell.
python manage.py inspectdb
This will print models corresponding to the current database structure. You copy the required model, make changes like adding validations, and then add other models.
python manage.py makemigrations will create migrations, and
python manage.py migrate will apply those migrations.
N.B: Your table name should be in the format "appname_modelname", where appname is your Django app name (not the project name).
A: Add --fake option to migrate command:
--fake
Tells Django to mark the migrations as having been applied or unapplied, but without actually running the SQL to change your
database schema.
This is intended for advanced users to manipulate the current
migration state directly if they’re manually applying changes; be
warned that using --fake runs the risk of putting the migration state
table into a state where manual recovery will be needed to make
migrations run correctly.
If you just started your Django project and didn't have an initial migration: comment out your Person model, make the initial migration, apply it, uncomment Person, make a migration for Person, and finally migrate Person with the --fake option.
|
stackoverflow
|
{
"language": "en",
"length": 416,
"provenance": "stackexchange_0000F.jsonl.gz:841951",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470715"
}
|
a4cc7a69fe38bfcb9287e964655caca093220a97
|
Stackoverflow Stackexchange
Q: Hourly average for each week/month in dataframe (moving average) I have a dataframe with a full year of data, with values for each second:
YYYY-MO-DD HH-MI-SS_SSS TEMPERATURE (C)
2016-09-30 23:59:55.923 28.63
2016-09-30 23:59:56.924 28.61
2016-09-30 23:59:57.923 28.63
... ...
2017-05-30 23:59:57.923 30.02
I want to create a new dataframe which takes each week or month of values and averages them over the same hour of each day (a kind of moving average, but for each hour).
So the result for the month case will be like this:
Date TEMPERATURE (C)
2016-09 00:00:00 28.63
2016-09 01:00:00 27.53
2016-09 02:00:00 27.44
...
2016-10 00:00:00 28.61
... ...
I'm aware of the fact that I can split the df into 12 df's for each month and use:
hour = pd.to_timedelta(df['YYYY-MO-DD HH-MI-SS_SSS'].dt.hour, unit='H')
df2 = df.groupby(hour).mean()
But I'm searching for a better and faster way.
Thanks !!
A: Here's an alternate method of converting your date and time columns:
df['datetime'] = pd.to_datetime(df['YYYY-MO-DD'] + ' ' + df['HH-MI-SS_SSS'])
Additionally you could groupby both week and hour to form a MultiIndex dataframe (instead of creating and managing 12 dfs):
df.groupby([df.datetime.dt.weekofyear, df.datetime.dt.hour]).mean()
|
Q: Hourly average for each week/month in dataframe (moving average) I have a dataframe with a full year of data, with values for each second:
YYYY-MO-DD HH-MI-SS_SSS TEMPERATURE (C)
2016-09-30 23:59:55.923 28.63
2016-09-30 23:59:56.924 28.61
2016-09-30 23:59:57.923 28.63
... ...
2017-05-30 23:59:57.923 30.02
I want to create a new dataframe which takes each week or month of values and averages them over the same hour of each day (a kind of moving average, but for each hour).
So the result for the month case will be like this:
Date TEMPERATURE (C)
2016-09 00:00:00 28.63
2016-09 01:00:00 27.53
2016-09 02:00:00 27.44
...
2016-10 00:00:00 28.61
... ...
I'm aware of the fact that I can split the df into 12 df's for each month and use:
hour = pd.to_timedelta(df['YYYY-MO-DD HH-MI-SS_SSS'].dt.hour, unit='H')
df2 = df.groupby(hour).mean()
But I'm searching for a better and faster way.
Thanks !!
A: Here's an alternate method of converting your date and time columns:
df['datetime'] = pd.to_datetime(df['YYYY-MO-DD'] + ' ' + df['HH-MI-SS_SSS'])
Additionally you could groupby both week and hour to form a MultiIndex dataframe (instead of creating and managing 12 dfs):
df.groupby([df.datetime.dt.weekofyear, df.datetime.dt.hour]).mean()
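A runnable sketch of the month/hour variant with toy data (dt.month is used here instead of dt.weekofyear, which later pandas versions removed, and the grouping keys are renamed for a readable MultiIndex):

```python
import pandas as pd

df = pd.DataFrame({
    "YYYY-MO-DD HH-MI-SS_SSS": [
        "2016-09-30 00:00:05.100",
        "2016-09-30 00:00:06.200",
        "2016-09-30 01:00:05.300",
    ],
    "TEMPERATURE (C)": [28.0, 30.0, 27.0],
})
df["datetime"] = pd.to_datetime(df["YYYY-MO-DD HH-MI-SS_SSS"])

# MultiIndex (month, hour) -> mean temperature over that hour of the month.
out = df.groupby([df.datetime.dt.month.rename("month"),
                  df.datetime.dt.hour.rename("hour")])["TEMPERATURE (C)"].mean()
# out[(9, 0)] -> 29.0, out[(9, 1)] -> 27.0
```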
|
stackoverflow
|
{
"language": "en",
"length": 183,
"provenance": "stackexchange_0000F.jsonl.gz:842012",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470932"
}
|
0e65b3f57d03ec85e1b08098105a570901e68e9f
|
Stackoverflow Stackexchange
Q: Angular 2 CLI - How to use /dist folder files created by angular-cli ng build I'm generating the dist folder after ng build, and my directory looks like:
C:\Source\angular> ng build
I cut and paste the dist folder into another directory:
C:\ReSource\angularbuild
After changing in Index.html to
<base href="./ReSource/angularbuild/dist">
Then
C:\Source\angular> ng serve
Getting inline.bundle.js, main.bundle.js, styles.bundle.js, vendor.bundle.js 404 Not Found errors.
How could I achieve it? I want to run the dist folder which is placed in the angularbuild folder from C:\Source\angular>.
Let me know the right way to do it.
A: After ng build, it just needs a server to execute it. Instead of ng serve, you can install an http-server from your terminal with npm i -g http-server, then serve your project from the dist folder using the command:
http-server ./dist
Or on Apache (for those using WAMP or XAMPP, for example), just copy all the files in the dist folder into your www folder, then restart the Apache service.
|
Q: Angular 2 CLI - How to use /dist folder files created by angular-cli ng build I'm generating the dist folder after ng build, and my directory looks like:
C:\Source\angular> ng build
I cut and paste the dist folder into another directory:
C:\ReSource\angularbuild
After changing in Index.html to
<base href="./ReSource/angularbuild/dist">
Then
C:\Source\angular> ng serve
Getting inline.bundle.js, main.bundle.js, styles.bundle.js, vendor.bundle.js 404 Not Found errors.
How could I achieve it? I want to run the dist folder which is placed in the angularbuild folder from C:\Source\angular>.
Let me know the right way to do it.
A: After ng build, it just needs a server to execute it. Instead of ng serve, you can install an http-server from your terminal with npm i -g http-server, then serve your project from the dist folder using the command:
http-server ./dist
Or on Apache (for those using WAMP or XAMPP, for example), just copy all the files in the dist folder into your www folder, then restart the Apache service.
A: If you deploy the angular4 project on you webserver to sub folder DOCUMENT_ROOT/test, then you can do the build as follows:
ng build --prod --base-href "/test/".
Copy the dist/* files to DOCUMENT_ROOT/test.
Access the app through: http://myserver/test. That worked for me.
A: The dist folder is not for ng serve.
It's a build that you can run without ng commands.
ng build:
It creates the build of your project, converting all your .ts files and other files into plain JS files that the browser can understand.
So there is no need to run ng serve over the dist folder;
just open the index.html file inside the dist folder and your whole project
will run.
A: The steps to create a build are:
> ng build --prod // can be another environment
To serve the /dist folder created by the angular-cli ng build command, we can use "serve".
Use the command below to install serve:
> yarn global add serve
and run:
serve dist/
You will get a URL; try it in any browser.
|
stackoverflow
|
{
"language": "en",
"length": 330,
"provenance": "stackexchange_0000F.jsonl.gz:842030",
"question_score": "29",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44470995"
}
|
60acb3e9c6581230ea87a820196e9477c620f9a2
|
Stackoverflow Stackexchange
Q: RxJava2 TestObserver class - where is getOnNextEvent similar to TestSubscriber class? I am searching for a way to get the values returned from onNext in a subscriber so I can verify the results. TestSubscriber had a nice method called getOnNextEvent, but when I use TestObserver I don't see a method like this that I can use to get the results and check them. They're all deprecated, and when I check in the IDE they're not even showing up.
Here is what i want to test:
`@Test
public void buildUseCaseObservable(){
TestObserver subscriber = TestObserver.create();
standardLoginUsecase.buildUseCaseObservable().subscribe(subscriber);
subscriber.assertNoErrors();
subscriber.assertSubscribed();
subscriber.assertComplete();
//I would like to test the actual onNext results also, but how?
}`
UPDATE:
I found a getEvents method, but it's deprecated. I don't see any alternative though.
A: TestObserver<List<User>> testObserver = new TestObserver<>();
testObserver.values();
Use the values() method to get the onNext items.
|
stackoverflow
|
{
"language": "en",
"length": 146,
"provenance": "stackexchange_0000F.jsonl.gz:842036",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44471015"
}
|
ccaccd6a1a7cda43d08fd89efb4dafbfb646c06d
|
Stackoverflow Stackexchange
Q: Azure separation of environments I have a few Windows VMs on Microsoft Azure Cloud, their uses are: dev, test and production.
What would be the best way to separate the VMs to different isolated environments, so that people won't accidentally deploy a dev build on the prod server and things like that? At what entity level (billing, subscriptions, resource group...) should the separation happen?
Demands:
1. Different roles will be created for each environment, so dev people can't upload to test or prod.
2. Each environment should have the ability to define environment variables (for connection strings and passwords).
3. I don't use Visual Studio as my IDE.
4. I must use only one subscription, because I've got a subscription with a free budget for a year, and I think that if I opened another subscription I'd have to pay.
A: As Peter Bons referred, the answer is: use a different Resource Group to represent an environment, and give users permissions to it.
link:
https://learn.microsoft.com/en-us/azure/active-directory/role-based-access-control-what-is
|
stackoverflow
|
{
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:842077",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44471122"
}
|
f3be437b5db52e58e617853a42b831fa26ed5ec6
|
Stackoverflow Stackexchange
Q: neovim + deoplete: how to enable 'timers' I want to make deoplete autocompletion suggestions pop up faster, that calls for setting
g:deoplete#auto_complete_delay
help says this requires +timers support. How do I enable these timers in my config?
Thanks!
A: +timers is a compile-time feature of Vim that is available in Neovim from v0.1.5 onwards.
Compile-time features cannot be dynamically toggled, they are either there or not. +timers is an optional feature in Vim but non-optional in Neovim. So if you are using Neovim 0.1.5+, you already have the feature active. In fact, deoplete would not work properly without it.
You can verify that the feature is enabled with :echo has('timers'). If the result is 1, it's there; 0 means it is not.
|
stackoverflow
|
{
"language": "en",
"length": 121,
"provenance": "stackexchange_0000F.jsonl.gz:842138",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44471328"
}
|
c5bf9adc41475206d86937bbef3e4da430738483
|
Stackoverflow Stackexchange
Q: Uncaught DOMException: Failed to execute 'define' on 'CustomElementRegistry' (Polymer 2.0) I'm facing this issue while running polymer init on polymer-cli.
Uncaught DOMException: Failed to execute 'define' on 'CustomElementRegistry'
A: Possible reasons:
- The element name starts with an uppercase letter
- The element name does not contain a hyphen (thanks to Margherita Lazzarini)
Long story:
I was working with polymer CLI and when I ran polymer init, among the series of options it asks me, one of them was Main element name for which I entered Polymer-test-element.
It was giving me this error :
Uncaught DOMException: Failed to execute 'define' on 'CustomElementRegistry': "Polymer-test-element" is not a valid custom element name
The problem was that I had used an uppercase letter in the declared element name. So when I replaced 'P' with 'p' it resolved the issue.
Hope this helps you :)
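The two naming rules above can be checked programmatically. The sketch below uses a simplified regex — an assumption on my part, since the full custom element name grammar in the HTML spec also excludes a list of reserved hyphenated names and allows some non-ASCII characters:

```javascript
// Rough check for a valid custom element name: it must start with a
// lowercase ASCII letter and contain at least one hyphen.
// (Simplified; the real spec grammar is broader and has exclusions.)
function isValidCustomElementName(name) {
  return /^[a-z][a-z0-9._]*-[a-z0-9._-]*$/.test(name);
}

console.log(isValidCustomElementName('Polymer-test-element')); // false (uppercase)
console.log(isValidCustomElementName('polymer-test-element')); // true
console.log(isValidCustomElementName('polymertestelement'));   // false (no hyphen)
```

Running a candidate name through such a check before calling customElements.define would surface both failure modes described in this question.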
A: Probably you have defined a Custom Element without a hyphen (-) in its name.
See this answer
A: Check your import, maybe you imported an element with e.g.
<link rel="import" href="../../bower_components/iron-icons/av-icons.html">
instead of
<link rel="import" href="../iron-icons/av-icons.html">
which could both be a valid path but the first one got me the DOMException.
|
stackoverflow
|
{
"language": "en",
"length": 192,
"provenance": "stackexchange_0000F.jsonl.gz:842195",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44471514"
}
|
99c9eb4fa06c65b8794083e792fd1eb026f39108
|
Stackoverflow Stackexchange
Q: how to validate selectize dropdownlist in mvc 4.5? I am working with MVC on .NET Framework 4.5. Validation works properly for all other fields, but I am finding it difficult for a selectize dropdownlist. Validation also works properly with a simple dropdownlist.
I tried to show the message using field-validation-error and input-validation-error but without any success. Here are some changes I made in jquery.validate.unobtrusive.js.
function onError(error, inputElement) { // 'this' is the form element
var container = $(this).find("[data-valmsg-for='" + escapeAttributeValue(inputElement[0].name) + "']"),
replaceAttrValue = container.attr("data-valmsg-replace"),
replace = replaceAttrValue ? $.parseJSON(replaceAttrValue) !== false : null;
container.removeClass("field-validation-valid").addClass("field-validation-error");
error.data("unobtrusiveContainer", container);
if (replace) {
container.empty();
error.removeClass("input-validation-error-+-").appendTo(container);
}
else {
error.hide();
}
    //For Validation Toggle Start
    debugger;
if ($(inputElement).parent().hasClass("selectize-input")) {
$(inputElement).parent().parent().parent().addClass("md-input-danger");
var container = error.data("unobtrusiveContainer");
container.removeClass("field-validation-valid").addClass("field-validation-error");
}
}
I did lots of research on this but I didn't get any proper solution.
Please help me to solve this issue.
Thanks
A: Add the jQuery code below in document ready to validate your selectize dropdown:
$.validator.setDefaults({
ignore: ':hidden:not([class~=selectized]),:hidden > .selectized, .selectize-control .selectize-input input'
});
|
stackoverflow
|
{
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:842208",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44471575"
}
|
ab6ad2d2148c3d0982fc2af85202151288cbdf55
|
Stackoverflow Stackexchange
Q: What is the 'import * as ...' equivalent for require? When using the ES6 import command, you can use an alias to import all functions from a file, for example:
import * as name from "module-name";
Is there an equivalent way to do this using require, i.e.:
const { * as name } = require('module-name');
A: const name = require('moduleName.js');
This means that when you have (moduleName.js)...
function foo(){
...
}
module.exports = { foo };
...the foo() function can be accessed by another file using:
const name = require('moduleName.js');
name.foo();
A: As simple as:
const name = require('module-name')
Usage:
name.yourObjectName
|
stackoverflow
|
{
"language": "en",
"length": 102,
"provenance": "stackexchange_0000F.jsonl.gz:842216",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44471610"
}
|
5e6eeb03db9ad9519d6a32183a99740a82d54ea1
|
Stackoverflow Stackexchange
Q: Setting Hibernate Dialect not working with Spring and YML config this is my config:
spring.jpa:
hibernate:
ddl-auto: update
connection:
charset: utf8
useUnicode: true
properties.hibernate.dialect: org.hibernate.dialect.MySQL5InnoDBDialect
Based on what I found in the docs and on SO it should work, but new tables are still created with MyISAM instead of InnoDB.
What is wrong in my config?
A: The property for setting the dialect is actually spring.jpa.properties.hibernate.dialect
Try this:
spring.jpa:
hibernate:
connection:
charset: utf8
useUnicode: true
ddl-auto: update
properties.hibernate.dialect: org.hibernate.dialect.MySQL5InnoDBDialect
Spring boot sample for reference
A: Make the change below in your application.yml:
spring.datasource:
url: jdbc:mysql://?verifyServerCertificate=false&useSSL=true&requireSSL=false
username:
password:
spring.jpa:
properties:
hibernate:
dialect: org.hibernate.dialect.MySQLDialect
It will work :)
|
stackoverflow
|
{
"language": "en",
"length": 104,
"provenance": "stackexchange_0000F.jsonl.gz:842217",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44471613"
}
|
ddd822294e70f02ead93acf07e76808571726ddd
|
Stackoverflow Stackexchange
Q: firebase init command failing to execute Can someone help me solve this error? I cannot run the firebase init command before running firebase deploy.
Error: Authentication Error: Your credentials are no longer valid. Please run firebase login --reauth
For CI servers and headless environments, generate a new token with firebase login:ci
A: If you are behind a proxy, invoke set "NODE_TLS_REJECT_UNAUTHORIZED=0"
as described here
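For context, that set command flips an environment variable read by the Node process the Firebase CLI runs on; with it set to '0', Node stops rejecting certificates it cannot verify (such as those injected by an intercepting corporate proxy). A minimal sketch:

```javascript
// Setting this before any TLS connection is opened disables certificate
// verification for the whole Node process -- acceptable as a workaround
// behind an intercepting proxy, but never in production.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';

console.log(process.env.NODE_TLS_REJECT_UNAUTHORIZED); // '0'
```

The shell `set` form from the answer does the same thing from outside the process, so the CLI picks it up without any code change.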
A: Your credentials are not valid.
All you need is to log in again.
Try the command firebase login --reauth
A: For such an Authentication Error from the Firebase CLI, do the steps below:
1. firebase logout
2. firebase login
3. Once the URL opens, go to your Google account and remove the access that was already given to Firebase App Distribution ["Remove Access"]
4. Come back to the URL window and select Allow
5. This should result in a successful login
If you are still repeatedly getting the Firebase CLI login issue after this, restart your system and run firebase login again.
Hope this helps!
A: Please try this command
set "NODE_TLS_REJECT_UNAUTHORIZED=0"
and then re-run,
firebase login
|
stackoverflow
|
{
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:842234",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44471670"
}
|