\\n\";\n}"},"language":{"kind":"string","value":"unknown"},"title":{"kind":"string","value":""}}},{"rowIdx":17772,"cells":{"_id":{"kind":"string","value":"d17773"},"partition":{"kind":"string","value":"val"},"text":{"kind":"string","value":"Create the ZIP file locally and use either commons-net FTP or SFTP to move the ZIP file across to the remote location, assuming that by \"remote location\" you mean some FTP server, or possibly a blade on your network.\nIf you are using the renameTo method on java.io.File, note that this doesn't work on some operating systems (e.g. Solaris) where the locations are on different shares. You would have to do a manual copy of the file data from one location to another. This is pretty simple using standard Java I/O."},"language":{"kind":"string","value":"unknown"},"title":{"kind":"string","value":""}}},{"rowIdx":17773,"cells":{"_id":{"kind":"string","value":"d17774"},"partition":{"kind":"string","value":"val"},"text":{"kind":"string","value":"It seems you forgot the return in the if clause. There's one in the else but none in the if.\n\nA: @furas' code made iterative instead of recursive:\ndef radiationExposure2(start, stop, step):\n\n totalExposure = 0\n time = stop - start\n newStart = start + step\n oldStart = start\n\n while time > 0:\n totalExposure += f(oldStart)*step\n\n time = stop - newStart\n oldStart = newStart\n newStart += step\n\n return totalExposure\n\nConverted to a for-loop:\ndef radiationExposure3(start, stop, step):\n\n totalExposure = 0\n for time in range(start, stop, step):\n totalExposure += f(time) * step\n\n return totalExposure\n\nUsing a generator expression:\ndef radiationExposure4(start, stop, step):\n return sum(f(time) * step for time in range(start, stop, step))\n\n\nA: As Paulo mentioned, your if statement had no return. Plus, you were referencing the variable radiation before it was assigned. A few tweaks and I am able to get it working.\nglobal totalExposure\ntotalExposure = 0 \n\ndef f(x):\n import math\n return 10 * math.e**(math.log(0.5)/5.27 * x)\n\ndef radiationExposure(start, stop, step):\n\n time = (stop-start)\n newStart = start+step\n\n if(time!=0):\n radiationExposure(newStart, stop, step) \n global totalExposure\n radiation = f(start) * step\n totalExposure += radiation\n return totalExposure\n else:\n return totalExposure\n\nrad = radiationExposure(0, 5, 1)\n# rad = 39.1031878433\n\n\nA: Cleaner version without global\nimport math\n\ndef f(x):\n return 10*math.e**(math.log(0.5)/5.27 * x)\n\ndef radiationExposure(start, stop, step):\n\n totalExposure = 0\n time = stop - start\n newStart = start + step\n\n if time > 0:\n totalExposure = radiationExposure(newStart, stop, step) \n totalExposure += f(start)*step\n\n return totalExposure\n\nrad = radiationExposure(0, 5, 1)\n\n# rad = 39.1031878432624\n\n\nA: As other mentioned, your if statement had no return. It seems you forgot the return in the if clause. 
It seems you forgot the return in the if clause. There's one in the else but none in the if.

A: @furas' code made iterative instead of recursive:
def radiationExposure2(start, stop, step):

    totalExposure = 0
    time = stop - start
    newStart = start + step
    oldStart = start

    while time > 0:
        totalExposure += f(oldStart) * step

        time = stop - newStart
        oldStart = newStart
        newStart += step

    return totalExposure

Converted to a for-loop:
def radiationExposure3(start, stop, step):

    totalExposure = 0
    for time in range(start, stop, step):
        totalExposure += f(time) * step

    return totalExposure

Using a generator expression:
def radiationExposure4(start, stop, step):
    return sum(f(time) * step for time in range(start, stop, step))


A: As Paulo mentioned, your if statement had no return. Plus, you were referencing the variable radiation before it was assigned. A few tweaks and I am able to get it working.
global totalExposure
totalExposure = 0

def f(x):
    import math
    return 10 * math.e**(math.log(0.5)/5.27 * x)

def radiationExposure(start, stop, step):

    time = (stop - start)
    newStart = start + step

    if(time != 0):
        radiationExposure(newStart, stop, step)
        global totalExposure
        radiation = f(start) * step
        totalExposure += radiation
        return totalExposure
    else:
        return totalExposure

rad = radiationExposure(0, 5, 1)
# rad = 39.1031878433


A: Cleaner version without global:
import math

def f(x):
    return 10*math.e**(math.log(0.5)/5.27 * x)

def radiationExposure(start, stop, step):

    totalExposure = 0
    time = stop - start
    newStart = start + step

    if time > 0:
        totalExposure = radiationExposure(newStart, stop, step)
        totalExposure += f(start)*step

    return totalExposure

rad = radiationExposure(0, 5, 1)

# rad = 39.1031878432624


To set up the environment variables:
1) Right-click the My Computer icon on your desktop and select Properties.
2) Click the Advanced tab.
3) Click the Environment Variables button.
4) Under System Variables, click New.
5) Enter the variable name as JAVA_HOME.
6) Enter the variable value as the installation path for the Java Development Kit.
   If your Java installation directory has a space in its path name, you should use the shortened path name (e.g. C:\Progra~1\Java\jre6) in the environment variable instead.
   Note for Windows users on 64-bit systems:
   Progra~1 = 'Program Files'
   Progra~2 = 'Program Files(x86)'
7) Click OK.
8) Click Apply Changes.
9) Close any command window which was open before you made these changes, and open a new command window. There is no way to reload environment variables from an active command prompt. If the changes do not take effect even after reopening the command window, restart Windows.
10) If you are running the Confluence EAR/WAR distribution, rather than the regular Confluence distribution, you may need to restart your application server.
Does one need to install Ant if the Ant libraries are already present in NetBeans?
No. You don't need to install it again.
Is there a better way to import the Sphinx jars into my .java project in NetBeans than through using Cygwin?
Using Cygwin (a Linux-like environment on Windows) definitely works, but I am unsure about any other method.
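If you prefer the command line to the GUI steps above, the same variable can be set with setx; the path below is a placeholder, so point it at your own JDK:

rem Sets a per-user variable; add /M (from an elevated prompt) for a system-wide one
setx JAVA_HOME "C:\Progra~1\Java\jre6"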
The following works with a CSV file. You may need to do this before proceeding.
[The HTML example that followed did not survive extraction; only its page title, "StackOverflow", remains.]

For JavaFX there is a library with write-back support: DataFX 2.0.
Sample examples can be found here.
If you need any further help on DataFX, you can post in the DataFX Google groups (Link).

It seems like you want to query your DOM by a specific tag, similar to jQuery selectors. Take a look at the project below; it might be what you are looking for.
https://github.com/jamietre/csquery

A: Load the HTML into an HtmlDocument object, then select the first node where the text input appears. The node has everything you might need:
var doc = new HtmlDocument();
string input = "Product 1";
doc.LoadHtml("Your HTML here"); // Or doc.Load(), depends on how you're getting your HTML

HtmlNode selectedNode = doc.DocumentNode.SelectSingleNode(string.Format("//*[contains(text(),'{0}')]", input));

var tagName = selectedNode.Name;
var tagClass = selectedNode.Attributes["class"].Value;
// etc.

Of course this all depends on the actual page structure, whether "Product 1" is shown anywhere else, whether other elements in the page also use the same node that contains "Product 1", etc.
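One caveat worth guarding against in the snippet above: SelectSingleNode returns null when nothing matches, and Attributes["class"] is null when the element has no class attribute. A defensive variant (same query, just guarded):

HtmlNode selectedNode = doc.DocumentNode.SelectSingleNode(
    string.Format("//*[contains(text(),'{0}')]", input));

if (selectedNode != null)
{
    var tagName = selectedNode.Name;
    // GetAttributeValue avoids a NullReferenceException when the attribute is absent
    var tagClass = selectedNode.GetAttributeValue("class", string.Empty);
}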
Google has fixed the issue: https://issuetracker.google.com/issues/112692348
I was able to run queries this morning using ordinal and offset with no issues.

You need to add a GROUP BY clause:
SELECT CHINFO.CHILDID
     , COUNT(1)
FROM BKA.CHILDEVENTS CHE
     JOIN BKA.CHILDEVENTPROPERITIES CHEP ON CHEP.EVENTID = CHE.EVENTID
     JOIN BKA.CHILDINFORMATION CHINFO ON CHE.CHILDID = CHINFO.CHILDID
WHERE ( CHE.TYPE = 'ACCIDENT'
        OR ( CHE.TYPE = 'BREAK'
             AND CHEP.PROPERTY = 'SUCCESS'
             AND CHEP.PROPERTYVALUE = 'FALSE'
           )
      )
  AND CHE.ADDDATE BETWEEN DATEADD(DD,
                                  -( DATEPART(DW, @DATETIMENOW - 7) - 1 ),
                                  @DATETIMENOW - 7)
                      AND DATEADD(DD,
                                  7 - ( DATEPART(DW, @DATETIMENOW - 7) ),
                                  @DATETIMENOW - 7)
GROUP BY CHINFO.CHILDID

A: A value in the WHERE clause will invalidate an outer join:
SELECT CHE.CHILDID
     , COUNT(1)
FROM BKA.CHILDEVENTS CHE
     LEFT JOIN BKA.CHILDEVENTPROPERITIES CHEP
       ON CHEP.EVENTID = CHE.EVENTID
      AND ( CHE.TYPE = 'ACCIDENT'
            OR ( CHE.TYPE = 'BREAK'
                 AND CHEP.PROPERTY = 'SUCCESS'
                 AND CHEP.PROPERTYVALUE = 'FALSE'
               )
          )
      AND CHE.ADDDATE BETWEEN DATEADD(DD,
                                      -( DATEPART(DW, @DATETIMENOW - 7) - 1 ),
                                      @DATETIMENOW - 7)
                          AND DATEADD(DD,
                                      7 - ( DATEPART(DW, @DATETIMENOW - 7) ),
                                      @DATETIMENOW - 7)
GROUP BY CHE.CHILDID

A: DECLARE @DATETIMENOW DATETIME
SET @DATETIMENOW = GETDATE()

SELECT B.WEEK FROM BKA.CHILDINFORMATION CI LEFT OUTER JOIN

(SELECT DISTINCT CHINFO.CHILDID, COUNT(*) AS week FROM BKA.CHILDINFORMATION CHINFO
 JOIN BKA.CHILDEVENTS CHE
   ON CHE.CHILDID = CHINFO.CHILDID
 JOIN BKA.CHILDEVENTPROPERITIES CHEP
   ON CHE.EVENTID = CHEP.EVENTID
 WHERE
 (CHE.TYPE = 'ACCIDENT' OR (CHE.TYPE = 'POTTYBREAK' AND CHEP.PROPERTY = 'SUCCESS'
  AND CHEP.PROPERTYVALUE = 'FALSE'))
 AND
 CHE.ADDDATE
 BETWEEN DATEADD(DD, -(DATEPART(DW, @DATETIMENOW-14)-1), @DATETIMENOW-14) AND
 DATEADD(DD, 7-(DATEPART(DW, @DATETIMENOW-14)), @DATETIMENOW-14) GROUP BY CHINFO.CHILDID) b
ON CI.ChildID = b.ChildID

This is a duplicate of SLComposeViewController setInitialText not showing up in View.
This behaviour is by design; prefilling was not allowed by policy, and now it's also enforced.
About the cancel button: this is a known issue and will be fixed. See the bug report: https://developers.facebook.com/bugs/962985360399542/

The most efficient way might be to import the OSM data of the specific area into a local PostGIS database using osm2pgsql or Imposm and do your analytics there.
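A minimal osm2pgsql import might look like the following; the database name and extract file are placeholders, and the flags should be checked against your installed version:

createdb gis
psql -d gis -c "CREATE EXTENSION postgis;"
osm2pgsql -d gis --create --slim area-extract.osm.pbf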
Crap4j is one fairly good metric that I'm aware of.
It's a Java implementation of the Change Risk Analysis and Predictions (CRAP) software metric, which combines cyclomatic complexity and code coverage from automated tests.

A: If you are looking for some useful metrics that tell you about the quality (or lack thereof) of your code, you should look into the following metrics:

* Cyclomatic Complexity
  * This is a measure of how complex a method is.
  * Usually 10 and lower is good, 11-25 is poor, higher is terrible.
* Nesting Depth
  * This is a measure of how many nested scopes are in a method.
  * Usually 4 and lower is good, 5-8 is poor, higher is terrible.
* Relational Cohesion
  * This is a measure of how well related the types in a package or assembly are.
  * Relational cohesion is somewhat of a relative metric, but useful nonetheless.
  * Acceptable levels depend on the formula. Given the following:
    * R: number of relationships in package/assembly
    * N: number of types in package/assembly
    * H: cohesion of relationship between types
  * Formula: H = (R+1)/N
  * Given the above formula, the acceptable range is 1.5 - 4.0
* Lack of Cohesion of Methods (LCOM)
  * This is a measure of how cohesive a class is.
  * Cohesion of a class is a measure of how many fields each method references.
  * Good indication of whether your class meets the Principle of Single Responsibility.
  * Formula: LCOM = 1 - (sum(MF) / (M*F))
    * M: number of methods in class
    * F: number of instance fields in class
    * MF: number of methods in class accessing a particular instance field
    * sum(MF): the sum of MF over all instance fields
  * A class that is totally cohesive will have an LCOM of 0.
  * A class that is completely non-cohesive will have an LCOM of 1.
  * The closer to 0 you approach, the more cohesive, and maintainable, your class.

These are just some of the key metrics that NDepend, a .NET metrics and dependency mapping utility, can provide for you. I recently did a lot of work with code metrics, and these 4 metrics are the core key metrics that we have found to be most useful. NDepend offers several other useful metrics, however, including Efferent & Afferent coupling and Abstractness & Instability, which combined provide a good measure of how maintainable your code will be (and whether or not you're in what NDepend calls the Zone of Pain or the Zone of Uselessness).
Even if you are not working with the .NET platform, I recommend taking a look at the NDepend metrics page. There is a lot of useful information there that you might be able to use to calculate these metrics on whatever platform you develop on.
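To make the LCOM formula concrete, here is a tiny worked example (the class shape is hypothetical, not from the original answer):

# Hypothetical class: M = 2 methods, F = 2 instance fields,
# where each method reads exactly one of the fields.
M, F = 2, 2
MF_per_field = [1, 1]  # one method touches each field
lcom = 1 - sum(MF_per_field) / (M * F)
print(lcom)  # 0.5 -- halfway between fully cohesive (0) and fully non-cohesive (1)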
A: Bug metrics are also important:

* Number of bugs coming in
* Number of bugs resolved

To detect, for instance, if bugs are not resolved as fast as new ones come in.

A: Code coverage is just an indicator and helps point out lines which are not executed at all in your tests, which is quite interesting. If you reach 80% code coverage or so, it starts making sense to look at the remaining 20% of lines to identify whether you are missing some use case. If you see "aha, this line gets executed if I pass an empty vector", then you can actually write a test which passes an empty vector.
As an alternative, if you have a specs document with use cases and functional requirements, you should map the unit tests to them and see how many UCs are covered by FRs (of course it should be 100%) and how many FRs are covered by UTs (again, it should be 100%).
If you don't have specs, who cares? Anything that happens will be OK :)

A: What about watching the trend of code coverage during your project?
As is the case with many other metrics, a single number does not say very much.
For example, it is hard to tell whether there is a problem if "we have a Checkstyle rules compliance of 78.765432%". If yesterday's compliance was 100%, we are definitely in trouble. If it was 50% yesterday, we are probably doing a good job.
I always get nervous when code coverage has gotten lower and lower over time. There are cases when this is okay, so you cannot switch your brain off when looking at charts and numbers.
BTW, sonar (http://sonar.codehaus.org/) is a great tool for watching trends.

A: Using code coverage on its own is mostly pointless; it only gives you insight if you are looking for unnecessary code.
Using it together with unit tests and aiming for 100% coverage will tell you that all the 'tested' parts (assuming the tests all passed, too) work as specified in the unit tests.
Writing unit tests from a technical design/functional design, having 100% coverage and 100% successful tests will tell you that the program is working as described in the documentation.
Now the only thing you need is good documentation, especially the functional design; a programmer should not write that unless (s)he is an expert in that specific field.

A: Scenario coverage.
I don't think you really want to have 100% code coverage. Testing, say, simple getters and setters looks like a waste of time.
The code always runs in some context, so you may list as many scenarios as you can (depending on the problem complexity, sometimes even all of them) and test them.
Example:
// parses a line from a .ini configuration file
// e.g. in the form of name=value1,value2
List parseConfig(string setting)
{
    (name, values) = split_string_to_name_and_values(setting, '=')
    values_list = split_values(values, ',')
    return values_list
}

Now you have many scenarios to test. Some of them:

* Passing a correct value
* Passing null
* Passing an empty string
* Passing an ill-formatted parameter
* Passing a string with a leading or trailing comma, e.g. name=value1, or name=,value2

Running just the first test may give you (depending on the code) 100% code coverage. But you haven't considered all the possibilities, so that metric by itself doesn't tell you much.

A: How about (lines of code)/(number of test cases)? Not extremely meaningful (since it depends on LOC), but at least it's easy to calculate.
Another one could be (number of test cases)/(number of methods).

A: As a rule of thumb, defect injection rates proportionally trail code yield, and they both typically follow a Rayleigh distribution curve.
At some point your defect detection rate will peak and then start to diminish.
This apex represents 40% of discovered defects.
Moving forward with simple regression analysis, you can estimate how many defects remain in your product at any point following the peak.
This is one component of Lawrence Putnam's model.

A: I wrote a blog post about why a high test coverage ratio is a good thing anyway.
I agree that when a portion of code is executed by tests, it doesn't mean that the validity of the results produced by this portion of code is verified by tests.
But still, if you are heavily using contracts to check state validity during test execution, high test coverage will mean a lot of verification anyway.

A: The value in code coverage is that it gives you some idea of what has been exercised by tests.
The phrase "code coverage" is often used to mean statement coverage, e.g., "how much of my code (in lines) has been executed", but in fact there are over a hundred varieties of "coverage". These other versions of coverage try to provide a more sophisticated view of what it means to exercise code.
For example, condition coverage measures how many of the separate elements of conditional expressions have been exercised. This is different from statement coverage. MC/DC ("modified condition/decision coverage") determines whether the elements of all conditional expressions have been demonstrated to control the outcome of the conditional, and is required by the FAA for aircraft software. Path coverage measures how many of the possible execution paths through your code have been exercised. This is a better measure than statement coverage, in that paths essentially represent different cases in the code. Which of these measures is best to use depends on how concerned you are about the effectiveness of your tests.
Wikipedia discusses many variations of test coverage reasonably well.
http://en.wikipedia.org/wiki/Code_coverage

A: This hasn't been mentioned, but the amount of change in a given file of code or method (by looking at version control history) is interesting, particularly when you're building up a test suite for poorly tested code. Focus your testing on the parts of the code you change a lot. Leave the ones you don't for later.
Watch out for a reversal of cause and effect. You might avoid changing untested code, and you might tend to change tested code more.

A: SQLite is an extremely well-tested library, and you can extract all kinds of metrics from it.

  As of version 3.6.14 (all statistics in the report are against that release of SQLite), the SQLite library consists of approximately 63.2 KSLOC of C code. (KSLOC means thousands of "Source Lines Of Code" or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 715 times as much test code and test scripts - 45261.5 KSLOC.

In the end, what always strikes me as most significant is that none of those possible metrics seem to be as important as the simple statement, "it meets all the requirements." (So don't lose sight of that goal in the process of achieving it.)
If you want something to judge a team's progress, then you have to lay down individual requirements. This gives you something to point to and say "this one's done, this one isn't". It's not linear (solving each requirement will require varying amounts of work), and the only way you can linearize it is if the problem has already been solved elsewhere (and thus you can quantize work per requirement).

A: I like revenue, sales numbers, profit. They are pretty good metrics of a code base.

A: Probably not only measuring the code covered (touched) by the unit tests, but how good the assertions are.
One metric that is easy to implement is to measure the size of the object passed to Assert.AreEqual. You can create your own Assert implementation that calls Assert.AreEqual and measures the size of the object passed as the second parameter.
The problem is that your package was encrypted by a user. This could have been you: are you logging in to the PC with a different login, or from a different machine? You're not going to be able to open it until you figure out who encrypted it, or from which account it was encrypted.

It is a theme issue on your end; probably textColor is set to white in your theme. Change these entries:
@color/white
@color/white

to black.

A: In your row_layout.xml file, set the textColor on the TextView itself (the original snippet was lost; see the sketch below).

A: Set your adapter on the ListView last.
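A minimal sketch of what the lost row_layout.xml snippet presumably showed; the id and exact attribute values here are illustrative:

<TextView
    android:id="@+id/row_text"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:textColor="@android:color/black" />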
With Gradle 3, implementation was introduced. Replace compile with implementation.
Use this instead:
pom.withXml {
    def dependenciesNode = asNode().appendNode('dependencies')
    configurations.implementation.allDependencies.each {
        def dependencyNode = dependenciesNode.appendNode('dependency')
        dependencyNode.appendNode('groupId', it.group)
        dependencyNode.appendNode('artifactId', it.name)
        dependencyNode.appendNode('version', it.version)
    }
}

A: I was able to work around this by having the script add the dependencies to the pom directly using pom.withXml.
// The publication doesn't know about our dependencies, so we have to manually add them to the pom
pom.withXml {
    def dependenciesNode = asNode().appendNode('dependencies')

    // Iterate over the compile dependencies (we don't want the test ones), adding a node for each
    configurations.compile.allDependencies.each {
        def dependencyNode = dependenciesNode.appendNode('dependency')
        dependencyNode.appendNode('groupId', it.group)
        dependencyNode.appendNode('artifactId', it.name)
        dependencyNode.appendNode('version', it.version)
    }
}

This works for my project; it may have unforeseen consequences in others.

A: Kotlin DSL version of the accepted answer:
create<MavenPublication>("maven") {
    groupId = "com.example"
    artifactId = "sdk"
    version = Versions.sdkVersionName
    artifact("$buildDir/outputs/aar/Example-release.aar")
    pom.withXml {
        val dependenciesNode = asNode().appendNode("dependencies")
        val configurationNames = arrayOf("implementation", "api")
        configurationNames.forEach { configurationName ->
            configurations[configurationName].allDependencies.forEach {
                if (it.group != null) {
                    val dependencyNode = dependenciesNode.appendNode("dependency")
                    dependencyNode.appendNode("groupId", it.group)
                    dependencyNode.appendNode("artifactId", it.name)
                    dependencyNode.appendNode("version", it.version)
                }
            }
        }
    }
}

A: I've upgraded C.Ross's solution. This example will generate the pom.xml with dependencies from the compile configuration and also with build-type-specific dependencies, for example if you use different dependencies for the release or debug version (debugCompile and releaseCompile). It also adds exclusions:
publishing {
    publications {
        // Create different publications for every build type (debug and release)
        android.buildTypes.all { variant ->
            // Dynamically creating the publication name
            "${variant.name}Aar"(MavenPublication) {

                def manifest = new XmlSlurper().parse(project.android.sourceSets.main.manifest.srcFile);
                def libVersion = manifest['@android:versionName'].text()
                def artifactName = project.getName()

                // Artifact properties
                groupId GROUP_ID
                version = libVersion
                artifactId variant.name == 'debug' ? artifactName + '-dev' : artifactName

                // Tell maven to prepare the generated "*.aar" file for publishing
                artifact("$buildDir/outputs/aar/${project.getName()}-${variant.name}.aar")

                pom.withXml {
                    // Creating additional node for dependencies
                    def dependenciesNode = asNode().appendNode('dependencies')

                    // Defining configuration names from which dependencies will be taken (debugCompile or releaseCompile, and compile)
                    def configurationNames = ["${variant.name}Compile", 'compile']

                    configurationNames.each { configurationName ->
                        configurations[configurationName].allDependencies.each {
                            if (it.group != null && it.name != null) {
                                def dependencyNode = dependenciesNode.appendNode('dependency')
                                dependencyNode.appendNode('groupId', it.group)
                                dependencyNode.appendNode('artifactId', it.name)
                                dependencyNode.appendNode('version', it.version)

                                // If there are any exclusions in the dependency
                                if (it.excludeRules.size() > 0) {
                                    def exclusionsNode = dependencyNode.appendNode('exclusions')
                                    it.excludeRules.each { rule ->
                                        def exclusionNode = exclusionsNode.appendNode('exclusion')
                                        exclusionNode.appendNode('groupId', rule.group)
                                        exclusionNode.appendNode('artifactId', rule.module)
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

A: I guess it has something to do with the from components.java directive, as seen in the guide. I had a similar setup and it made the difference to add the line into the publication block:
publications {
    mavenJar(MavenPublication) {
        artifactId 'rest-security'
        artifact jar
        from components.java
    }
}

A: I was using the maven-publish plugin for publishing my aar dependency, and I could not actually use the maven task in my case. So I used the mavenJava task provided by the maven-publish plugin and used it as follows:
apply plugin: 'maven-publish'

publications {
    mavenAar(MavenPublication) {
        from components.android
    }

    mavenJava(MavenPublication) {
        pom.withXml {
            def dependenciesNode = asNode().appendNode('dependencies')
            // Iterate over the api dependencies (we don't want the test ones), adding a node for each
            configurations.api.allDependencies.each {
                def dependencyNode = dependenciesNode.appendNode('dependency')
                dependencyNode.appendNode('groupId', it.group)
                dependencyNode.appendNode('artifactId', it.name)
                dependencyNode.appendNode('version', it.version)
            }
        }
    }
}

I hope that it helps someone who is looking for how to publish an aar along with the pom file using the maven-publish plugin.

A: Now that compile is deprecated, we have to use implementation:
pom.withXml {
    def dependenciesNode = asNode().appendNode('dependencies')
    configurations.implementation.allDependencies.each {
        def dependencyNode = dependenciesNode.appendNode('dependency')
        dependencyNode.appendNode('groupId', it.group)
        dependencyNode.appendNode('artifactId', it.name)
        dependencyNode.appendNode('version', it.version)
    }
}
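Whichever variant you use, a quick sanity check is to publish to your local Maven repository and inspect the generated POM (the task comes with the maven-publish plugin):

./gradlew publishToMavenLocal
# the artifact and pom land under ~/.m2/repository/<group>/<artifact>/<version>/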
('0' to 'z').filter(_.isLetterOrDigit).toSet

A: A more functional version of your code is this:
scala> Traversable(('A' to 'Z'), ('a' to 'z'), ('0' to '9')) map (_ toSet) reduce (_ ++ _)

Combining it with the above solutions, one gets:
scala> Seq[Seq[Char]](('A' to 'Z'), ('a' to 'z'), ('0' to '9')) reduce (_ ++ _) toSet

If you have just three sets, the other solutions are simpler, but this structure also works nicely if you have more ranges or they are given at runtime.

A: How about this:
scala> ('a' to 'z').toSet ++ ('A' to 'Z') ++ ('0' to '9')
res0: scala.collection.immutable.Set[Char] = Set(E, e, X, s, x, 8, 4, n, 9, N, j, y, T, Y, t, J, u, U, f, F, A, a, 5, m, M, I, i, v, G, 6, 1, V, q, Q, L, b, g, B, l, P, p, 0, 2, C, H, c, W, h, 7, r, K, w, R, 3, k, O, D, Z, o, z, S, d)

Or, alternatively:
scala> (('a' to 'z') ++ ('A' to 'Z') ++ ('0' to '9')).toSet
res0: scala.collection.immutable.Set[Char] = Set(E, e, X, s, x, 8, 4, n, 9, N, j, y, T, Y, t, J, u, U, f, F, A, a, 5, m, M, I, i, v, G, 6, 1, V, q, Q, L, b, g, B, l, P, p, 0, 2, C, H, c, W, h, 7, r, K, w, R, 3, k, O, D, Z, o, z, S, d)

A: I guess it can't be simpler than this:
('a' to 'z') ++ ('A' to 'Z') ++ ('0' to '9')

You might guess that ('A' to 'z') would include both, but it also adds some extra undesirable characters, namely:
([, \, ], ^, _, `)

Note:
This will not return a Set but an IndexedSeq. I assumed you don't mind the implementation, but if you do, and you do want a Set, just call toSet on the result.

A: If you want to generate all the printable ASCII characters, this should do it:
(' ' to '~').toSet

Most Heroku CLI commands support the -a parameter to specify the application, in this case:
heroku buildpacks:set heroku/nodejs -a <your-app-name>
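If you run commands against the same app often, linking the working directory to the app once saves retyping the flag (the app name is a placeholder):

heroku git:remote -a your-app-name
heroku buildpacks:set heroku/nodejs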
I agree with @Bickknght that the unpacking is unnecessary. Don't use unpacking when dealing with an unknown or variable number of elements.
In [57]: alist = [np.arange(10), np.arange(10,20), np.arange(20,30)]

Making a list of arrays, where we don't need the ravel:
In [58]: for arr in np.nditer(alist):
    ...:     print(arr)
    ...:
(array(0), array(10), array(20))
(array(1), array(11), array(21))
(array(2), array(12), array(22))
(array(3), array(13), array(23))
(array(4), array(14), array(24))
(array(5), array(15), array(25))
(array(6), array(16), array(26))
(array(7), array(17), array(27))
(array(8), array(18), array(28))
(array(9), array(19), array(29))

Compare this with a straightforward list zip iteration:
In [59]: for arr in zip(*alist):
    ...:     print(arr)
    ...:
(0, 10, 20)
(1, 11, 21)
(2, 12, 22)
(3, 13, 23)
(4, 14, 24)
(5, 15, 25)
(6, 16, 26)
(7, 17, 27)
(8, 18, 28)
(9, 19, 29)

The difference is that nditer makes 0d arrays rather than scalars, so the elements have a shape of () and a dtype. That matters in some cases where you want to modify the arrays (but they have to be defined as read/write). Otherwise nditer does not offer any real advantages.
In [62]: %%timeit
    ...: ll = []
    ...: for arr in np.nditer(alist):
    ...:     ll.append(np.var(arr))
    ...:
539 µs ± 17.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [63]: %%timeit
    ...: ll = []
    ...: for arr in zip(*alist):
    ...:     ll.append(np.var(arr))
    ...:
524 µs ± 3.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

If you can avoid the Python-level loops, things will be a lot faster:
In [65]: np.stack(alist,1)
Out[65]:
array([[ 0, 10, 20],
       [ 1, 11, 21],
       [ 2, 12, 22],
       [ 3, 13, 23],
       [ 4, 14, 24],
       [ 5, 15, 25],
       [ 6, 16, 26],
       [ 7, 17, 27],
       [ 8, 18, 28],
       [ 9, 19, 29]])
In [66]: np.var(np.stack(alist,1),axis=1)
Out[66]:
array([66.66666667, 66.66666667, 66.66666667, 66.66666667, 66.66666667,
       66.66666667, 66.66666667, 66.66666667, 66.66666667, 66.66666667])
In [67]: timeit np.var(np.stack(alist,1),axis=1)
66.7 µs ± 1.47 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

I've not attempted to test for -inf.
===
Another important difference with nditer: it iterates over all elements in a flat sense; in effect it does the ravel.
Make a list of 2d arrays:
In [81]: alist = [np.arange(10.).reshape(2,5), np.arange(10,20.).reshape(2,5), np.arange(20,30.).reshape(2,5)]

Plain iteration operates on the first dimension (in this case the 2), so zipped elements are arrays:
In [82]: for arr in zip(*alist):
    ...:     print(arr)
    ...:
(array([0., 1., 2., 3., 4.]), array([10., 11., 12., 13., 14.]), array([20., 21., 22., 23., 24.]))
(array([5., 6., 7., 8., 9.]), array([15., 16., 17., 18., 19.]), array([25., 26., 27., 28., 29.]))

nditer generates the same tuples as in the 1d array case. In some cases that's fine, but it's hard to avoid if you don't want it:
In [83]: for arr in np.nditer(alist):
    ...:     print(arr)
    ...:
(array(0.), array(10.), array(20.))
(array(1.), array(11.), array(21.))
(array(2.), array(12.), array(22.))
(array(3.), array(13.), array(23.))
(array(4.), array(14.), array(24.))
(array(5.), array(15.), array(25.))
(array(6.), array(16.), array(26.))
(array(7.), array(17.), array(27.))
(array(8.), array(18.), array(28.))
(array(9.), array(19.), array(29.))

A: The zip function is a solution here, as explained by @hpaulj. Working with 2d arrays instead of 1d simply requires using this function twice, as the following code shows:
variances = []
for arr in zip(*cost_surfaceS):
    for element in zip(*arr):
        if float("-inf") not in element:
            variance = np.var(element, dtype=np.float32)
            variances.append(variance)
        else:
            variances.append(float("-inf"))

The -inf values are handled by the if condition, which avoids computing the variance of arrays containing at least one infinite value.
I'm reasonably confident it is to do with the order of your .antMatchers() statements.
You currently have .antMatchers("/secured/**").fullyAuthenticated() before .antMatchers("/secured/admin/**").hasRole("ADMIN"). Spring Security is probably matching against this first matcher and applying the fullyAuthenticated() check, which means that authorisation is granted if you only have the USER role.
I would suggest re-ordering things so that your .antMatchers() statements look like this:
.antMatchers("/public/login.jsp").permitAll()
.antMatchers("/public/home.jsp").permitAll()
.antMatchers("/public/**").permitAll()
.antMatchers("/resources/clients/**").fullyAuthenticated()
.antMatchers("/secured/user/**").hasRole("USER")
.antMatchers("/secured/admin/**").hasRole("ADMIN")
.antMatchers("/secured/**").fullyAuthenticated()

In this scenario Spring will match the earlier, more specific rules for the /secured/admin/** and /secured/user/** resources before falling back to the /secured/** statement.

In your code,
printf ( "%d\n", a[0] );
printf ( "%d\n", a[1] );
printf ( "%d\n", a[10] );
printf ( "%d\n", a[100] );

produces undefined behaviour by accessing out-of-bounds memory.
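A sketch of a bounds-safe version; the array declaration is not shown in the original question, so the size here is illustrative:

#include <stdio.h>

int main(void) {
    int a[2] = {10, 20};               /* illustrative size */
    size_t n = sizeof a / sizeof a[0]; /* number of valid elements */

    for (size_t i = 0; i < n; i++)     /* never index past n - 1 */
        printf("%d\n", a[i]);

    return 0;
}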
What kind of change detection do you use? Is it OnPush?
https://angular.io/api/core/ChangeDetectionStrategy
enum ChangeDetectionStrategy {
  OnPush: 0
  Default: 1
}

OnPush: 0
Use the CheckOnce strategy, meaning that automatic change detection is deactivated until reactivated by setting the strategy to Default (CheckAlways). Change detection can still be explicitly invoked. This strategy applies to all child directives and cannot be overridden.

If you are using OnPush, you should trigger change detection manually.
https://angular.io/api/core/ChangeDetectorRef detectChanges() or markForCheck()
Example (the template markup is a reconstruction; only the error binding survived extraction):
import { Component, ChangeDetectionStrategy, ChangeDetectorRef } from '@angular/core';

@Component({
  selector: 'alert',
  template: `
    <div class="alert" *ngIf="isScreenError">
      <button type="button" (click)="closeAlert()">Close</button>
      ERROR: {{errorMessage.error.message}}
    </div>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class AlertComponent {
  public errorMessage = {
    error: {
      message: 'Some message'
    }
  };

  public isScreenError = true;

  constructor(
    private cd: ChangeDetectorRef,
  ) { }

  public closeAlert(): void {
    this.isScreenError = false;
    this.cd.markForCheck();
  }
}

You are missing the new when creating your view model.
Your code should look like this:
ko.applyBindings(new ViewModel());

Without the new, this refers to the global window object, so your remove function is declared globally; that is why $parent is not working.
Demo JsFiddle.
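A minimal sketch of why the new matters (the view model shape is hypothetical, not the asker's actual code):

function ViewModel() {
    var self = this;
    self.items = ko.observableArray(['a', 'b']);
    self.remove = function (item) { self.items.remove(item); };
}

ko.applyBindings(new ViewModel()); // `this` inside the constructor is the new instance
// ko.applyBindings(ViewModel()); // without `new`, `this` is `window`,
//                                // so `remove` leaks onto the global object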
Although this is a very old question, I was also looking and couldn't find the answer until I found out what the problem is.
The EasySMPP library uses asynchronous calls to connect to the SMSC. When you run Console.ReadLine() you are prompted to type your text, and during that delay the SMSC bind has already completed; that is why it works with Console.ReadLine(). When you run without it, the code executes so fast that your application has not yet bound to the SMSC, and it fails. Giving the bind some time before sending works:
SmsClient client = new SmsClient();
client.Connect();
System.Threading.Thread.Sleep(5000); // give the asynchronous bind time to complete
if (client.SendSms("MyNumber", "XXXXXXXXX", "Hi"))
    Console.WriteLine("Message sent");
else
    Console.WriteLine("Error");
client.Disconnect();
Console.ReadLine();

A: Try surrounding it with try and catch: put the if inside the try and report exception.Message in the catch.

You can use a list comprehension:
x = [[el[1]] for el in filtered]

or:
x = [[y] for x, y in filtered]

You can also use map with itemgetter. To print it, iterate over the iterable object returned by map; you can use list, for instance:
from operator import itemgetter
x = map(itemgetter(1), filtered)
print(list(x))

A: You are not closer to a solution trying to pass a key to map. map only takes a function and an iterable (or multiple iterables). Key functions are for ordering-related functions (sorted, max, etc.).
But you were actually pretty close to a solution at the start:
a = map(itemgetter(0), filtered)

The first problem is that you want the second item (item 1), but you're passing 0 instead of 1 to itemgetter. That obviously won't work.
The second problem is that a is a map object, a lazy iterable. It does in fact have the information you want:
>>> a = map(itemgetter(1), filtered)
>>> for val in a: print(val, sep=' ')
3.0 70.0 3.0 50.0 5.0 21.0

… but not as a list. If you want a list, you have to call list on it:
>>> a = list(map(itemgetter(1), filtered))
>>> print(a)
[3.0, 70.0, 3.0, 50.0, 5.0, 21.0]

Finally, you wanted a list of single-element lists, not a list of elements. In other words, you want the equivalent of item[1:] or [item[1]], not just item[1]. You can do that with itemgetter, but it's pretty ugly, because you can't use slice syntax like [1:] directly; you have to manually construct the slice object:
>>> a = list(map(itemgetter(slice(1, None)), filtered))
>>> print(a)
[[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]]

You could write this a lot more nicely by using a lambda function:
>>> a = list(map(lambda item: item[1:], filtered))
>>> print(a)
[[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]]

But at this point, it's worth taking a step back: map does the same thing as a generator expression, but map takes a function, while a genexpr takes an expression. We already know exactly what expression we want here; the hard part was turning it into a function:
>>> a = list(item[1:] for item in filtered)
>>> print(a)

Plus, you don't need that extra step to turn it into a list with a genexpr; just swap the parentheses for brackets and you've got a list comprehension:
>>> a = [item[1:] for item in filtered]
>>> print(a)

Can you add width and height to the map div? If you get a blank page instead of the map, it's probably missing CSS.
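A sketch of the kind of rule that is usually missing; the #map selector is a guess, so match it to your actual container:

#map {
    width: 100%;
    height: 400px; /* map containers collapse to zero height without an explicit value */
}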
Your counter is indeed stopping, but you then reassign mytimeout after the if statement, so the timer starts again. I'm guessing the $state.go() still runs, but the counter continues in the console.
Instead, start the timer again only if the counter is less than 10; otherwise call the resolving function.
$scope.startTimer = function() {
    $scope.counter = 0;
    $scope.onTimeout = function() {
        $log.info($scope.counter);

        if ($scope.counter < 10) {
            mytimeout = $timeout($scope.onTimeout, 1000)
        } else {
            $scope.stop();
            $state.go($state.current.name, {}, {
                reload: true
            })
        }

        $scope.counter++;
    }

    mytimeout = $timeout($scope.onTimeout, 1000);
}

I finally found the reason why the implicit style didn't work.
I'm using ModernUI with WPF 4.0 and I deleted the