Dataset schema (per-record fields):
url - string, length 13 to 4.35k
tag - string, 1 distinct value
text - string, length 109 to 628k
file_path - string, length 109 to 155
dump - string, 96 distinct values
file_size_in_byte - int64, 112 to 630k
line_count - int64, 1 to 3.76k
https://studentlabs.montana.edu/remotelabs/index.html
code
Remote Computer Labs

To support MSU's policy for remote learning, the VDI team, in conjunction with other colleges, is pleased to provide limited-access Remote Labs for students.

What is Remote Labs? Transitioning faculty and students into a remote, online environment amidst the coronavirus outbreak has brought to light some issues, such as the lack of access to resources like computer labs and the software they offered. Remote Labs is here to fix that issue and provide virtual access to the software applications students need in order to complete their studies. This is accomplished by allowing students to connect to a suite of applications remotely, via a web browser, or by downloading and installing a desktop application.

Remote Labs provides access to the Full Application Suite of software and an adjusted version of the Rendering Application Suite. These lists can be found on the Labs Software page. Additional software needs not covered by the Full Application Suite or the adjusted Rendering Suite can be requested by a faculty member. Requests should be made by contacting the Service Desk via email; see "How Does Faculty Request to Add Software to the Student Labs". MSU faculty and students have access to the Full Application Suite. A limited version of the Rendering Suite is available on a class-by-class basis per faculty request. Visit Remote Labs How-To for instructions on connecting to a Virtual Machine via the web or desktop client.

The resources currently being provided are Virtual Desktops. To find out more about virtualization, please visit the Virtualization webpage on the UIT Student Labs website. Each session is deleted after the user logs off, and a new session is created at the next login. Because these are virtual computers, storage on the computer does not persist across logins. Please take advantage of cloud storage options if you need to have files available across multiple sessions. Microsoft OneDrive is signed in automatically for each user; it uses the user's school account and provides 1TB (1,000GB) of storage. You can find OneDrive in the File Explorer in Windows. Box cloud storage is also available to users. More information on OneDrive and Box cloud file storage can be found on the File Storage Options page.

These resources are accessed over the internet, so your connection speed can affect how responsive these computers feel. If you are having issues connecting to a Virtual Machine (VM), please first check your internet connection and speeds to make sure your connection is stable. Streaming video content is exceptionally demanding for these remote sessions. Tips and suggestions for improving internet connections can be found on the ITAnywhere webpage. Tablets and other mobile devices:
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740838.3/warc/CC-MAIN-20200815094903-20200815124903-00266.warc.gz
CC-MAIN-2020-34
2,805
20
http://qeweticuhuxy.mihanblog.com/post/63
code
Professional Test Driven Development with C#: Developing Real World Applications with TDD James Bender, Jeff McWherter Bender, James, 1972- Professional test driven development with C# : developing real world applications with TDD / James Bender, Jeff McWherter. Professional Test Driven Development with C#: Developing Real World Applications with TDD (Wrox Professional Guides) book download. Posted 12 February 2013 - 05:17 PM. Whittaker: Unfortunately, treating TDD as a luxury feature gives the impression to hobby and professional software developers alike that test-driven design is nothing but a bell and a whistle in Visual Studio - which it is not. Jay Kimble, CodeBetter's resident AJAX guru, issued a little challenge to us TDD bloggers about using Test Driven Development to develop a custom extension to the MS Ajax ScriptManager control. Right then and there I saw it: Microsoft's attitude about test driven development has been totally wrong, precisely because they were asking the worst possible person about it. Now, the very real problem that Jay's little example exposes to broad daylight: in .Net development, and especially ASP.Net WebForms development, you often have to go out of your way to create testable code. (From "The Underlying Problems of TDD in the .Net World".) Now it is very important to note that BDD is simply the evolution of the existing practice of Test Driven Development (TDD). A majority, if not all, of the principles of TDD are still applicable. I wondered how many people here actually use TDD in their day-to-day work. That is because TDD is not applicable in most cases in the real world. I'm giving a talk tomorrow for some students of Manchester University on Test Driven Development (TDD).
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330233.1/warc/CC-MAIN-20190825130849-20190825152849-00421.warc.gz
CC-MAIN-2019-35
1,738
2
https://support.video.ibm.com/hc/en-us/articles/360001961080-Enterprise-Video-Streaming-Federation-with-ADFS-3-0
code
This article will go through the ADFS 3.0 configuration guide. Prerequisites: AD and ADFS 3.0 installed (use Federation Server while installing ADFS).

1. Open AD FS Management and select 'Add Relying Party Trust...'.
2. You are greeted with a Welcome page. Press 'Start'. The 'Select Data Source' menu appears. Select the first option, 'Import data about the relying party published online'. Provide the following Federation metadata address and press Next. (This is NOT an example URL; you must enter it exactly!)
2b. (Skip this part if the online import was successful.) There might be an error message here saying that an error occurred during the attempt to read the federation metadata. When this happens, you will need to obtain the Federation Metadata XML manually. The best way to do this is to open a browser, navigate to the same URL, and save the file as an XML. Once the file is saved on your server, you can import it manually using the second option in this same menu.
3. Add your desired display name and notes and press Next. If you don't want to configure multi-factor authentication, press Next again. On the Authorization Rules screen, make sure 'Permit all users to access this relying party' is chosen.
4. After this, your relying party is ready to be added. Press 'Next' again and then click 'Finish'.
5. Right-click the newly added Relying Party Trust and select 'Properties'.
6. Under the Monitoring menu, untick the 'Monitor relying party' option. After that, select the 'Encryption' menu and remove the certificate.
7. Under the Advanced menu, change the secure hash algorithm to 'SHA-1'. This is an important step and cannot be skipped.
8. Now we need to add the proper configuration so that email addresses get passed to the extauth service properly. These steps have changed significantly from the previous ADFS 2.0 configuration setup. Go to ADFS - Relying Party Trusts, select the newly added trust, and click "Edit Claim Rules..." in the right sidebar.
9. Click "Add Rule..." in the window and set the claim rule to "Send LDAP Attributes as Claims".
10. Name it "email-to-email" and select 'Active Directory' as the Attribute Store. Select the LDAP Attribute "E-Mail-Addresses" and select the outgoing claim type 'E-Mail Address'. (Yes, both columns should contain email address.) Press Finish.
11. Click "Add Rule..." in the window again and set this claim rule to "Transform an Incoming Claim".
12. Next, name it, then set the incoming claim type to 'E-Mail Address', the outgoing claim type to 'Name ID', and the outgoing name ID format to 'Email'.
13. Press 'Finish'. You should now have these two rules, and you are done! Login to your portal should now work with your ADFS 3.0 setup!

Configuring your IBM Video Streaming account with ADFS 3.0

Set up your account security settings from this page: https://video.ibm.com/dashboard/integrations/security
- Entity ID: https://[Your-ADFS-Server-URL]/adfs/services/trust (your ADFS entity ID)
- Certificate: certificate data from your ADFS metadata XML. It can be found here on your server: https://[Your-ADFS-Server-URL]/FederationMetadata/2007-06/FederationMetadata.xml
- Login URL: https://[Your-ADFS-Server-URL]/adfs/ls/
- Logout URL: https://[Your-ADFS-Server-URL]/adfs/ls/
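If the manual download in step 2b is needed, or for filling the Certificate field above, the certificate data can also be pulled out of the metadata programmatically. A minimal Python sketch, assuming [Your-ADFS-Server-URL] is replaced with your real ADFS hostname (this script and its names are illustrative, not part of the official guide):

# Sketch: pull the certificate data out of the ADFS federation metadata,
# for pasting into the IBM Video Streaming "Certificate" field.
# Assumes [Your-ADFS-Server-URL] is replaced with your real ADFS hostname.
import urllib.request
import xml.etree.ElementTree as ET

METADATA_URL = ("https://[Your-ADFS-Server-URL]"
                "/FederationMetadata/2007-06/FederationMetadata.xml")

with urllib.request.urlopen(METADATA_URL) as resp:
    tree = ET.fromstring(resp.read())

# X509Certificate elements sit in the XML digital-signature namespace.
DSIG = "{http://www.w3.org/2000/09/xmldsig#}"
for cert in tree.iter(DSIG + "X509Certificate"):
    print(cert.text.strip())

Note that the metadata typically contains more than one certificate (signing and encryption), so the sketch prints each one it finds.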
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655446.86/warc/CC-MAIN-20230609064417-20230609094417-00645.warc.gz
CC-MAIN-2023-23
3,273
24
https://www.dezip.org/v1/6/https/dezip.org/dezip-1.1.zip/dezip/README.md
code
dezip-1.1.zip / dezip / README.md what is this? dezip is a website for browsing source code archives. to use it, type what motivated you to make it? discomfort with the centralization of software development into sites like github and gitlab. convenient source code browsing shouldn't be coupled so tightly to repository hosting services. which protocols and archive formats are supported? currently, the following protocols are supported: is there a way to search? yeah! click the magnifying glass button or press f to bring up the search field. selected text will appear in the field automatically (so you don't have to copy and paste it). press enter to search. j and k move forward and backward through search results. where can i find the source code? see BUILD.md for build instructions. who made this?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817036.4/warc/CC-MAIN-20240416000407-20240416030407-00441.warc.gz
CC-MAIN-2024-18
808
13
https://archive.stsci.edu/fits/users_guide/node10.html
code
For example, the table extensions have allowed tables, lists, etc., associated with a data matrix to be written in the same FITS file as the data matrix, implicitly establishing the relationship among the different pieces of information. The method chosen was to define "extensions": HDUs which, like the primary HDU, are composed of a header consisting of card images in ASCII text with keyword=value syntax, followed by data. There could be many kinds of extensions, each with a different defined data format. Structuring extensions in this way made it easy to modify software that read the FITS header for the primary array to read extension headers as well. Information about the extension data would appear in the extension header in a way specified by the rules for that extension. All logical records would be 23040 bits (= 2880 8-bit bytes), as the original paper describing FITS prescribed for information following the primary HDU. The HDU itself is called an extension; its defined format is called an extension type.

The requirement that no revision to FITS could cause an existing FITS file to go out of conformance dictated a number of the basic rules governing the construction of new extensions. In the original FITS data sets, the Basic FITS structure of header and array appeared at the start of the file. Therefore, extensions would appear only after the primary Basic FITS header and array. Because the initial array ended only at the end of a 23040-bit record, an extension would always start a new record. It was envisioned that most FITS extensions would become standard in the same way as Basic FITS, through acceptance by the astronomical community and endorsement by the IAU. An extension would go through a development period, initially being used only by a subset of the FITS community that would refine it. Other extensions might be in use only within a limited group and might never become standard at all.

Now, a FITS file may include many extensions of different types. Given that the order of adoption of extensions as standard cannot be predicted with certainty, it would not have been wise to prescribe an order of extension types within a file. Suppose standard extensions were required to appear first, and the fourth extension in a data set were to become standard; the data set would then go out of conformance. It was thus agreed that extensions might appear in any order in a FITS file. With extensions appearing in any order, those extensions a user might want to or be able to read could be separated by extensions with which the user would be unfamiliar; the user might wish to read, say, only the third and seventh and skip all the rest. The user should not have to know anything about the structure of the intervening extensions to be able to read the ones of interest. To make this process possible, two general rules were specified: the software reading the FITS file would have a list of types of extensions that it could handle, and by reading the type name from a standard location in the header, the software would be able to determine whether or not it could handle a given extension. If it couldn't, it could at least calculate how many records it would have to skip to reach the beginning of the next extension. A complete set of rules, described as the Generalized Extensions agreement (Grosbøl et al. 1988; hereafter FITS Paper III), was endorsed by the IAU in 1988. These rules appear in section 3.3.
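That skip calculation follows directly from the header: BITPIX gives the bits per data element, the NAXISn keywords give the array dimensions, and (for generalized extensions) PCOUNT and GCOUNT enter the standard sizing formula. A minimal Python sketch, assuming the header keywords have already been parsed into a dict (the function and its names are illustrative):

import math

RECORD_BYTES = 2880  # one FITS logical record: 23040 bits

def records_to_skip(header):
    """How many 2880-byte records an extension's data occupies.

    `header` is assumed to be a dict of already-parsed keyword values,
    e.g. {"BITPIX": 8, "NAXIS": 2, "NAXIS1": 100, "NAXIS2": 50,
          "PCOUNT": 0, "GCOUNT": 1}.
    """
    naxis = header["NAXIS"]
    if naxis == 0:
        return 0
    elements = 1
    for i in range(1, naxis + 1):
        elements *= header[f"NAXIS{i}"]
    # Generalized-extension sizing: GCOUNT groups, PCOUNT extra parameters.
    elements = header.get("GCOUNT", 1) * (header.get("PCOUNT", 0) + elements)
    data_bytes = abs(header["BITPIX"]) // 8 * elements
    return math.ceil(data_bytes / RECORD_BYTES)  # data is padded to a full record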
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304760.30/warc/CC-MAIN-20220125035839-20220125065839-00476.warc.gz
CC-MAIN-2022-05
3,447
8
https://groups.yahoo.com/neo/groups/ntb-html/conversations/topics/546
code
Web Server Logs - Hi Tabbers! eGroups kept having problems sending to my e-mail address, so now I am using the web only for reading, and just to see what it is like, using the web to post. Anyway, the non-profit that I webmaster for is switching to a new web host soon that is less expensive, but offers lots more features. One of the features is access to the web access log. I have not worked with these files yet, but have started some research. I have found lots of Perl scripts that must be run on the web server as a CGI script to get useful information out of the log files. I was wondering if anyone is using NoteTab to gather information from web logs, using Perl, Gawk, or just a clip(s). I am sure I could write something to glean from the logs, but why re-invent the wheel? Or does anyone know of a good freeware program or script that does not have to run on the web server? (So far, I have not found one on the web.) (I think we can run stuff in the CGI bin, but I want to go one step at a time.) Right now, we must rely on a "free" counter that only reports total visits to the home page, browser, OS, country, etc. I am looking forward to improving the site with access to what page(s) get the most hits, and how to re-arrange the site so what should get the most

Larry Hamilton mailto:Larry_H@... Hamilton National Genealogical Society, Inc. My Web Site: http://notlimaH.tripod.com
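A log-gleaning script of the kind the poster describes is only a few lines in any scripting language. A minimal sketch in Python (standing in for the Perl/Gawk/clip options mentioned; the access.log filename and Common Log Format are assumptions):

# Count hits per page from a Common Log Format access log,
# downloaded from the host rather than run as a CGI script.
import re
from collections import Counter

# Typical CLF line: host ident user [time] "GET /path HTTP/1.0" status bytes
REQUEST = re.compile(r'"[A-Z]+ (\S+) [^"]*"')

hits = Counter()
with open("access.log") as log:           # assumed local copy of the log
    for line in log:
        match = REQUEST.search(line)
        if match:
            hits[match.group(1)] += 1

for path, count in hits.most_common(10):  # ten most-requested pages
    print(f"{count:6d}  {path}")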
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807146.16/warc/CC-MAIN-20171124070019-20171124090019-00704.warc.gz
CC-MAIN-2017-47
1,402
25
https://android.stackexchange.com/questions/206678/unable-to-wipe-cache-and-install-roms-on-samsung-sm-a720f-a7-2017?noredirect=1
code
I got a notification that a newer version of Android was available for my mobile, so I just downloaded and installed it. Everything was fine until I saw my screen was freezing. The home and back buttons worked, but when you touched the screen nothing happened. I had to lock and unlock the screen to use my phone for a few seconds before it froze again. This happened with some previous updates too; back then I used Wipe Cache Partition and the problem was gone. But this time, I used WCP and nothing happened. You can see on the image below. So, I decided to download the previous firmware and the Android 6 & 7 firmware from Sammobile and install it using Odin. I am sure that these firmwares are for my phone; I checked everything. But then I see a "complete(write) operation failed" error and FAIL! in Odin for all the firmwares. After seeing this error I try a second time, but then it just keeps showing me "Setup Connection" and nothing happens. Any ideas?
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250616186.38/warc/CC-MAIN-20200124070934-20200124095934-00281.warc.gz
CC-MAIN-2020-05
935
3
https://www.hackster.io/courses/uc-berkeley/user-interface-design/summer-2015?page=1
code
wherein intrepid undergraduates explore user interface design for smartwatch apps &c. Go to the restroom without going out of your way. Group N: Team Ndroids: Final video report and project source code. Watch me catch 'em all! Discover nature's wild side. Don't let your money fly away... unless you want it to! Start your adventure. Sleep Shift: An application that helps international travelers avoid jet lag. This is what we've been working up to since week 1. Lots of time, effort, sweat, blood, and searching went into this. Enjoy. Making the world a safer place. Tourio is a curated platform for people to share the cities they love, and for explorers to experience the true nature of cities. We've got your BAC. Worry-free travelling with children.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474670.19/warc/CC-MAIN-20240227021813-20240227051813-00662.warc.gz
CC-MAIN-2024-10
756
13
https://www.codeproject.com/lounge.aspx?fid=1159&df=90&mpp=10&noise=1&prof=true&sort=position&view=expanded&spc=none&select=4430801&fr=20456
code
I've used Git, Mercurial, SourceSafe and TFS, even (from the dark ages) DECset on VMS and an SCM on CDC Kronos systems (darn, can't remember the name, and yeah, SCMs have been around on mainframes since the 1960s). TFS gave me the least amount of trouble. I develop both C# and .NET alongside embedded "bare iron" ARM GCC using Eclipse. TFS worked fine for both. Working with embedded involves building boards as well as writing code. I used TFS to version schematics, PCB layouts and reference manuals, even field service work instructions, along with code. That's where the database method is handy; it stores binary BLOBs as well as code deltas. What I like best is the lack of "file droppings" in source code directories. TFS puts everything in a SQL database. This is developing in a commercial enterprise environment where project management is critical. TFS has a very nice work item structure to track design, bugs, testing, even deployment, and it integrates well with both VS and Eclipse, along with MS Project. The type of programming is not quite the usual mix. What I need is a common pool of drivers and RTOS tasks that I pick and choose from for different circuit boards, sort of an a la carte program design methodology. Code is added to individual files with conditional compiles for different variations, due to IC pinouts, but basically similar targets. Directory-level commit gets in the way because individual files are shared across several target builds, not the entire directory. Sure, other SCMs can do file-level check in/out, but TFS does it best. These days I have to use GitHub, management directives from on high, but I do miss the ease of use of TFS.

I have found TFS or TFS services (the free online version) to be the easiest I've ever experienced. I recently used GitHub and find myself cursing the creators. Most of my problems seem to be related to large file handling. I ended up having to learn the command line just to clean up the messes. I've never experienced anything that frustrating with TFS. Others will swear by GitHub, but use TFS unless you like pain.

Git or Mercurial (Hg). Very small footprint and extremely easy to install and get started with. They encourage committing early and often so everything is tracked. They are both extremely easy to use, though I think Hg's commands are a bit easier to remember for some reason - though as you'll see they share many commands.
1. Download and install Git or Mercurial.
2. Download a .gitignore or .hgignore file (for your language, like C#) so binaries etc. are ignored (not committed).
3. c:\MyProject\>git init <ENTER>
3. c:\MyProject\>hg init <ENTER>
4. c:\MyProject\>git add . <ENTER>
4. c:\MyProject\>hg add . <ENTER>
5. c:\MyProject\>git commit -a -m "initial commit of project" <ENTER>
5. c:\MyProject\>hg commit -m "initial commit of project" <ENTER>
You are set up and ready to go; now all your changes will be tracked. You can do hg diff or git diff and you'll see diffs. You can do hg status or git status and you'll see files that have been changed. It's so easy. Once you use it, you will never want to do anything without it, because everything is tracked and you can easily move to a previous revision and throw the current branch away. Oh, well, when you install Hg it will also install TortoiseHg Workbench, which is a UI.
Also, if you decided to go with the git bash installation, then you can type c:/>gitk<ENTER> and a Tortoise-like UI will appear, and you can do the work from there. Good luck.

I'd add a vote for Atlassian SourceTree - how you get your repos in one place I don't know, unless you go GitHub public - I use local git repos and occasionally, when working on one project, pull from a colleague's company private repo. Source control is one thing where I like to 'see' what I'm doing, as opposed to the command line, so SourceTree works well for me.

I'd recommend GitHub, which also has the advantage that it works with every other tool and development platform out there, from Linux to Windows. "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare

My setup also involves a server where all development projects and files reside. My main development PC uses mapped drives (not available offline, since it is always connected to the server) and my laptop uses the same mapped-drive configuration, except the drives are marked to be available offline. I only work on the laptop one or two days a week. Before leaving the office, I simply sync all the offline files. This allows me to work wherever I need to (on the laptop), with or without an internet connection. When I get back to the office, I let the laptop sync any changes with the file server. As a side note, this also makes for a very good backup system... if the server crashed, all my code would be safe on the laptop.

One thing I didn't see anyone mention that is specific to your case: you'll need to be careful when choosing a cloud-based provider. Unity projects can get enormous, because the UI will expect to check in your asset files as well. Asset files are very large, and (can be) binary, which means they won't play well with most source code control systems. Using raw Git will have a learning curve, but you could use your desktop as your "server". Git does not have a built-in concept of a central server. Every machine that has Git installed is both a server and a client. A central server in a Git organization is simply one that all the developers of that organization agree upon ahead of time.

As you work in .NET and Unity, I believe that your winning free combination is Git with Visual Studio Community Edition. Visual Studio assists you with many Git functions, and allows you to work in the cloud with Visual Studio Team Services, GitHub and any server that supports Git clone, fetch, pull, etc. You don't have to settle definitively on one cloud repository, because you can use a different one for each project. Visual Studio Team Services is great for large software projects because it offers project control tools (Agile, Scrum, etc.), and it is the only one that allows you to have some private projects for free. GitHub is the best for open source projects, etc.

I second the Git and Bitbucket recommendation. I use it for all of my personal projects. You can access it from any computer, and you can also make code changes directly from your browser (I do this while I'm at work and need to make a quick bug fix). I know there's a bit of a learning curve with the Git command line, so look for some GUI options like GitExtensions, or something like that.
I have been using Git, but I have had problems branching then not branching, and wound up going back to my basic source control - zip the whole project, putting yyyy-mm-dda_c (where 'a' is a letter that increments through the day, and 'c' is a short comment) at the end of the filename. The only time I have had trouble with zip is in zipping code for OSX on Windows, then trying to go back to it by unzipping on OSX.

I'd suggest using git without a central server - although you could easily use a central server - I suppose your desktop would kind of fill the role of a central server. Since git is completely file-based, it doesn't care about where the files are; they can be somewhere over HTTP, HTTPS, or even a local file system. Since a local file system is a possibility, you can use a UNC path to access a file share on a remote system. What this allows you to do is set up your projects directory as a file share on your desktop, and then pull/push between your laptop and desktop. So you get all the benefits of version control, without needing to set up a server to host it all. Of course, you also lose the benefits of having an off-site backup, but you could always periodically push to some free source control server like Bitbucket or GitHub. Also, since everyone is suggesting clients, I'd throw Git Extensions into the mix. It's not that polished, but it doesn't try to hide how git is working from you. It's just a GUI layer that maps (more or less) 1:1 to git commands.

1. When we ask "how long will it take to fix the issue?" they reply, "We don't know, sir". (Any rough estimate at least would help us to plan better.) We don't know means we don't know. What if I say 3 hours, then discover Hell broke loose and we need 5? We're done when we're done - also, repairing things in a hurry may very well end up in the OP's case. Been there...

Anurag Gandhi wrote: 3. Oh, we couldn't replicate that issue. Please provide more detail if it occurs again. Even the IT support hates that, but it happens all the time. If we can't repro, we can't debug or solve.
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282926.64/warc/CC-MAIN-20170116095122-00450-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
9,076
39
https://tutorialsdojo.com/tag/elb-health-check/
code
We all know that health checks are a very useful tool for making sure that AWS services such as AWS ELB and Amazon Route 53 know the state of their targets before forwarding traffic to them. In this section, we will take a look at ELB health checks and Route 53 health checks, and compare them with one another.

- EC2 instance health check
- Elastic Load Balancer (ELB) health check
- Auto Scaling and Custom health checks

Amazon EC2 performs automated checks on every running EC2 instance to identify hardware and software issues. Status checks are performed every minute, and each returns a pass or a fail status. If all checks pass, the overall status of the instance is OK. If one or more checks fail, the overall status is impaired. Status checks are built into EC2, so they cannot be disabled or deleted. You can, however, create or delete alarms that are triggered based on the result of the status checks. [...]
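As an illustration, those status-check results can be read programmatically. A minimal sketch using boto3 (the region and instance ID are placeholders):

# Read the system and instance status checks for one EC2 instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

resp = ec2.describe_instance_status(
    InstanceIds=["i-0123456789abcdef0"],   # placeholder instance ID
    IncludeAllInstances=True,              # report even non-running instances
)

for status in resp["InstanceStatuses"]:
    print(status["InstanceId"],
          "system:", status["SystemStatus"]["Status"],     # e.g. ok / impaired
          "instance:", status["InstanceStatus"]["Status"])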
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710690.85/warc/CC-MAIN-20221129064123-20221129094123-00810.warc.gz
CC-MAIN-2022-49
902
2
https://sourceforge.net/p/awstats/discussion/43428/thread/b0fd47f6/
code
Hi, I'm very new at this - I just installed AWStats and everything works good - I was just wondering how I could get the stats for time spent on site and the users' country... I'm using IIS 6 - is this possible? I'm having trouble getting the log file to work right... IIS 6...
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982905736.38/warc/CC-MAIN-20160823200825-00171-ip-10-153-172-175.ec2.internal.warc.gz
CC-MAIN-2016-36
378
5
https://community.reckon.com/discussion/8030835/importing-bank-transactions
code
Importing Bank Transactions

I have just started trying to import bank transactions via a QIF file. I successfully imported the last month's data, but didn't realise I was missing some transactions from the start of the month, so I tried to import the missed dates. I go to Import Bank Statement and select the newly imported QIF file; it says it is converting the file, but the next screen does not pop up, so I can't proceed. If I click View in the online banking screen, nothing comes up. What am I missing?
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299894.32/warc/CC-MAIN-20220129002459-20220129032459-00354.warc.gz
CC-MAIN-2022-05
557
6
https://www.toogit.com/freelancer/profile/yevgeny
code
Country: Ukraine | Europe/Kiev Availability: Full-time: 30+ hrs/week Hourly Rate: $32/hr Member since: Jul 13, 2017

Highly ambitious and self-motivated web developer with 7+ years' experience developing and designing web and internet-related applications. Talent for quickly mastering new technology and concepts - a thirst for knowledge and learning new skills. A keen sense for code debugging and problem resolution. Extremely detail-oriented and results-driven development approach using Agile and Scrum. Full version control and project management using GitHub, Bitbucket, JIRA and Pivotal Tracker.

About my skills:
** Web Front-End skills **
✔ Good experience with HTML5, CSS (SCSS, Sass, Less, Stylus), Bootstrap, responsive web design, adaptive markup
** Web Back-End skills **
✔ PHP frameworks such as Laravel, Symfony, CodeIgniter, Yii, Zend Framework
✔ CMS: WordPress theme & plugin development
✔ Database: MySQL, MongoDB
✔ Servers: Apache, Nginx
** Mobile skills **
✔ Native: iOS & Android development using Objective-C, Swift, Java
✔ Hybrid: PhoneGap, Xamarin, Cordova, Ionic
** Other skills **
Webpack, Gulp, Bower, Git, Composer. And I am very familiar with Google Maps, Google APIs, and Google Analytics.

I have developed lots of projects, and during my work I have mastered lots of skills. My main targets while developing applications are: 4) Latest technologies

I have good communication skills, so don't worry about that. If you hire me, I will do my best to make your project a great success. Thank you for reading my overview to the end.
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205600.75/warc/CC-MAIN-20190326180238-20190326202238-00424.warc.gz
CC-MAIN-2019-13
1,597
29
https://essif-lab.github.io/framework/docs/terms/community/
code
A Community is a party consisting of at least two different parties (the members of the community) that seek to collaborate with each other so that each of them can achieve its individual objectives more efficiently and/or effectively. As a party, the community sets its own objectives, which its members contribute to realizing, because the results thereof aim to facilitate their cooperation. There is no fundamental difference between communities and other parties in the sense that they are all parties that set objectives and produce and/or consume associated results. However, the objectives of a community are expected to serve the cooperation of its members, facilitate their collaborations, and remove any obstacles thereto.

A community serves its members as they seek to realize their individual objectives. Note however that this 'serving' implies that each of its members sufficiently contributes to the realization of the community's objectives. This may be at odds with that member realizing its own objectives. A community would do well
- to have objectives in place that support its members in handling such balancing acts,
- to realize that its members are parties, i.e. autonomous entities that the community cannot control, and from there
- to manage the risks of (some of) its members not contributing their share of the work.

Note that a single set of parties can constitute different communities, the difference becoming apparent in the different objectives that the communities pursue, or in the fact that individual parties may join or leave one, but not the other, community. A community is a specialization of the more generic ecosystem in the sense that it is a party in its own right (which an ecosystem need not be), and it (actively) facilitates the cooperation between its members, whereas in non-community ecosystems such cooperation is not actively planned or organized. The purpose of having communities is to organize and optimize collaborations between parties that reduce their individual effort for realizing their individual objectives by more than the effort they must put into contributing to the community.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499919.70/warc/CC-MAIN-20230201081311-20230201111311-00455.warc.gz
CC-MAIN-2023-06
2,191
10
https://fortiguard.com/encyclopedia/ips/17669
code
This indicates an attack attempt against a heap overflow vulnerability in Remote Desktop Connection. The vulnerability is caused by an error when the vulnerable software's ActiveX control handles a malicious property assignment. It allows a remote attacker to execute arbitrary code by sending a crafted web page.

Affected systems:
Windows XP Service Pack 2
Windows XP Service Pack 3
Windows Vista Service Pack 1 and Windows Vista Service Pack 2
Windows Vista x64 Edition Service Pack 1 and Windows Vista x64 Edition Service Pack 2
Windows Server 2008 for 32-bit Systems and Windows Server 2008 for 32-bit Systems Service Pack 2*
Windows Server 2008 for x64-based Systems and Windows Server 2008 for x64-based Systems Service Pack 2*
Windows Server 2008 for Itanium-based Systems and Windows Server 2008 for Itanium-based Systems Service Pack 2

System Compromise: Remote attackers can gain control of vulnerable systems. Apply patch, available from the vendor's website:
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00222.warc.gz
CC-MAIN-2020-29
938
11
https://www.avsforum.com/forum/24-digital-hi-end-projectors-3-000-usd-msrp/2669937-projector-cold-room.html
code
You're in the UK, so I'm going to assume that's 5 degrees Celsius - so still above freezing. Where's your projector, in a fridge? Most specs stipulate that any projector should be kept above 15C. IMO, I would use the projector to warm itself up. If you relied on the room's own temperature, it would take longer than 10 minutes - something like overnight - before the components reached room temperature. Personally I don't think that's good for your machine. Once the room warms to, say, 24C, the warm air holds more moisture. This happens just from you breathing in the room. Once you let the room cool back to only 5 degrees, condensation will occur, and you certainly don't want that, especially with a Sony projector.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526560.40/warc/CC-MAIN-20190720173623-20190720195623-00242.warc.gz
CC-MAIN-2019-30
704
9
https://www.techadvisor.co.uk/forum/helproom-1/impossible-creatures-cannot-detect-directx-116585/
code
When I try to start up Impossible Creatures, nothing happens. So I try to use the IC Config tool, but it gives me an error saying "No valid renderer was detected. Please verify that DirectX is properly installed." I have run the DXdiag tool from the Run menu, and DirectX seems to be working fine (it says its version is 9.0a). Has anybody else had this problem, or does anyone have a solution?
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947822.98/warc/CC-MAIN-20180425135246-20180425155246-00323.warc.gz
CC-MAIN-2018-17
385
3
http://matchingpennies.com/sequence_prediction/
code
Sequence prediction is a common component of many IQ tests. Such tests often have questions of the form: what comes next - 1, 1, 2, 3, 5, _, _ - and the testee has to fill in the blanks.

Prediction is a basic component of intelligence. Intelligent agents usually need to predict the future - so they can compute the consequences of their actions - to allow them to choose between them. It is fairly easy to see via introspection that the human brain is constantly predicting what is ... about ... to ... happen ... next. If what actually happens doesn't match what was expected to happen, then a bunch of significance sensors fires off in your head - to alert you that your model of the world is out of date and in need of repair.

Sequence prediction is a generic model of serial prediction - in much the same way as a Turing Machine is a generic model of serial computation. Serial models can also be used to model parallel systems - similarly, serial predictors can model parallel ones, using serialisation or other techniques.

How do sequence prediction systems work? They work in a broadly similar manner to data compression systems. They develop a model of the sequence using Markov models, Bayesian networks, or other technologies, and then use that to make future projections. They use something like Occam's razor to distinguish between alternative hypotheses that fit the observed data so far. (A minimal sketch of such a model appears below.)

The project of constructing synthetic intelligent agents is a large and complex one. Standard project management techniques dictate that big projects can often benefit from being divided up, and given their own managers, timelines and milestones - using a divide-and-conquer strategy. One important component of many such projects is a sequence predictor.

What sequence should a machine intelligence predict? Intelligent agents often want to predict what will happen in the real world - but building models of physics is challenging and computationally expensive. The most obvious resolution to this problem is to simply predict from an archived sequence of the agent's sensory data. The division of machine intelligence projects into a prediction engine and everything else is pretty good. However, it is not perfect, due to phenomena involving forgetting. Rather than remembering the entire history of the contents of their senses, real organisms selectively forget unimportant events, while retaining their memories of important ones. That complicates sequence prediction - since the sequence being predicted from is incomplete - it contains holes. Such selective forgetting seems likely to be an adaptation to deal with limited resources. The simplest way to deal with this problem is simply to ignore it. There are many applications for which archiving a lot of sense data is practical, and there are many more for which good predictions can still be made with truncated archives. More storage helps to reduce the significance of this problem. It is not an enormous issue.

The sequence prediction problems actually faced by real agents typically have the feature of incrementally predicting the evolution of a continuous stream of sensory data. That means that an agent's model of past sense data can be reused from one moment to the next. If an agent's senses tell it that what has actually happened matches what it predicted would happen, its existing model is good, doesn't need updating, and can be reused to make the next set of predictions.

Sequence prediction engines have many important applications that will help drive the funding of their development.
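As a concrete illustration of the compression-style modelling described above, here is a minimal sketch of an order-1 Markov predictor over a binary alphabet (a deliberately simple stand-in for the Markov models and Bayesian networks mentioned; Laplace smoothing plays the role of a prior over hypotheses):

# Minimal order-1 Markov sequence predictor over the symbols "0" and "1".
# Counts transitions seen so far and predicts the next symbol's probability.
from collections import defaultdict

class MarkovPredictor:
    def __init__(self):
        # counts[prev][nxt] = number of times `nxt` followed `prev`
        self.counts = defaultdict(lambda: {"0": 0, "1": 0})
        self.prev = None

    def predict(self):
        """Return P(next symbol == "1") given the last symbol seen."""
        if self.prev is None:
            return 0.5                      # no data yet: uniform prior
        c = self.counts[self.prev]
        # Laplace smoothing: add one phantom count to each outcome.
        return (c["1"] + 1) / (c["0"] + c["1"] + 2)

    def observe(self, symbol):
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

predictor = MarkovPredictor()
for s in "0101010101":                      # an alternating toy sequence
    p1 = predictor.predict()
    print(f"P(next=1) = {p1:.2f}, actually saw {s}")
    predictor.observe(s)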
People want to be able to predict things. They want to be able to predict stock prices, the weather, climate changes, earthquakes, famines, plagues and other disasters - and so on.

Lastly, one important thing we want computers to do is to help with automating the writing of computer programs - which is currently a time-consuming and expensive task that occupies many humans. Sequence prediction is a problem that can help with that. The best sequence prediction agents will typically generate programs expressed in Turing-complete languages - where executing the program generates the observed sequence and projects it into the future. The task of generating such models from observed sequences involves finding a short program that produces the specified output. Not every computer programming task is of this form, but some are, and the effort to build sequence predictors will contribute significantly to the effort to automate computer programming.

So, to summarise, sequence prediction is a key component of most machine intelligence projects. It is also a relatively modular component - and so represents a problem that can be split off and solved independently. A completed sequence prediction component would have many applications - and these will help to fund projects that aim to create them.

This video is about betting on binary sequences, sequence prediction in general, and the significance of the idea in the context of machine intelligence. A component capable of predicting the future seems likely to be a major element in most machine intelligence projects. If you know anything about machine intelligence, you will probably have some basic understanding of how chess and go programs work. They consider the future consequences of their possible moves, and then select the one that they think is most likely to lead to the best outcome for them. If you break such systems down into modular elements, one component tries to predict the likely future consequences of its actions, and then another component assigns value to the results of those actions. Because the future is uncertain, the predictions consist of a branching tree of ever-dividing possibilities. Because the tree rapidly becomes large and unmanageable, other algorithms attempt to prune the tree - to quickly eliminate those branches that apparently deserve little attention.

Sequence prediction is concerned with the problem of calculating the tree of possible future situations. It is a model of serial prediction. Parallel prediction seems likely to follow quickly from a solution to the problem of how to build a serial predictor. In many respects, prediction is a central core problem for those interested in synthesising intelligence. If we could predict the future, it would help us to solve many of our problems. Also, the problem has nothing to do with values. It is an abstract math problem that can be relatively simply stated. The problem is closely related to the one of building a good-quality universal data compressor.

For real sequences, prediction should be probabilistic. So, if we imagine a prediction of a binary sequence, rather than making a prediction of "0" or "1", the prediction should be in the form of the probabilities of each possible next symbol.

Sequence prediction can be dealt with as a reinforcement learning problem. [Diagram: a box containing a reinforcement learning algorithm.] The agent makes a prediction about what symbol it will receive next - and then it observes what symbol actually arrives. A kind of betting system can be used to describe the associated rewards - which can then be used to drive some kind of reinforcement learning algorithm.
The prediction system acts like a bookie - setting the probabilities at the chances that it thinks it will observe "0" or "1", under the constraint that these must add up to 1.0. Then punters bet on the available options. For the sake of this discussion, imagine that a single punter is always forced to bet one pound on each option. The bookie's aim is to make money from the punter. The punter receives the reciprocal of the probability in pounds as their payout. So, if we imagine the bookie sets the odds at a 90% chance of "0" and a 10% chance of "1", then if "0" is observed, the punter collects 1 pound and 11 pence (the reciprocal of 0.9) - whereas if a "1" shows up, the punter gets 10 pounds (the reciprocal of 0.1). Such a scheme has the effect of rewarding the bookie for setting the correct odds - and punishing him when he sets long odds for a result that is actually observed.

The training data can come from practically any problem - passively predicting video streams, audio streams, text, web pages, whatever. All you need then is the learning algorithm to go inside the box. Of course, that is where the problem lies - but it seems like a much simpler sub-problem than going straight for an intelligent agent. You don't have to do any messy tree pruning - and you don't have to figure out what is valuable and what isn't. The testing cycle for this type of system could potentially be extremely rapid, if the training data can be supplied quickly enough. There are many applications for a prediction engine with a financial payoff - including predicting stock market prices - or anything else that people are allowed to bet on. Optimisation strategies could be used to try to solve the sequence-prediction problem - perhaps using a large population of bookies and punters in resource competition with each other.

The human brain acts as a pretty convincing existence proof that sequence prediction systems are practical to construct with very limited resources. Plus, we have the results relating to universal artificial intelligence - which strongly suggest that a predictor can quickly learn to do astonishingly well in the real world if the only thing that it knows about the world is that it exhibits the regularities described by Occam's razor. Since the problem of sequence prediction seems so much easier than building a whole machine intelligence, it seems highly likely that it will be solved first.

The human brain has already had its memory capabilities eclipsed by those of machines. Its arithmetic unit is also made totally obsolete by machines. Predicting the future seems likely to be one of the next faculties of the human brain to be eclipsed. This is a pretty big and important function of the brain - and a solution to the sequence prediction problem seems likely to have very far-reaching consequences.
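That bookie-and-punter scheme is easy to simulate. A minimal sketch (the fixed 90%/10% odds and the ten-symbol sample sequence are illustrative assumptions):

# Simulate the bookie/punter betting scheme: the punter bets one pound on
# each outcome, and receives 1/p pounds back for the outcome that occurs.
def bookie_profit(p_zero, observed):
    """Bookie's profit on one round, given the probability set for "0"."""
    p = {"0": p_zero, "1": 1.0 - p_zero}   # the odds must sum to 1.0
    stake = 2.0                            # one pound staked on each outcome
    payout = 1.0 / p[observed]             # reciprocal-of-probability payout
    return stake - payout

# A bookie who sets 90%/10% odds against a sample sequence of symbols:
sequence = "0000000001"
total = sum(bookie_profit(0.9, s) for s in sequence)
print(f"profit over {len(sequence)} rounds: {total:+.2f} pounds")

Each correctly-anticipated "0" earns the bookie a small profit (2 minus 1.11 pounds), while the one "1" costs him heavily (a 10-pound payout), matching the reward-and-punishment behaviour described above.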
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824899.43/warc/CC-MAIN-20171021205648-20171021225648-00738.warc.gz
CC-MAIN-2017-43
9,924
175
https://libraries.io/nimble/natu
code
Natu is a package for making Game Boy Advance games in Nim. Originally primarily a wrapper for libtonc, it is now growing in its own direction: ditching some old conventions to be more Nim-friendly, and adding more libraries.
- Full GBA memory map + flag definitions
- BIOS routines
- Interrupt manager
- A powerful text system (TTE)
- Surfaces (draw to tiles like a canvas)
- Efficient copy routines
- Sin/Cos/Div LUTs + other math functions
- Fixed-point numbers, 2D vector types
- Random number generator
- Hardware sprites, affine matrix helpers
- Color/palette utilities
- Button states (hit, down, released)
- mGBA logging functions
- Maxmod bindings for music/sfx

You will need devkitARM with the GBA tools and libraries. If you are using the graphical installer, simply check "tools for GBA development" during setup. Otherwise, be sure to install the gba-dev group of packages. Either way, the libtonc package is included, so you should be good to go! Before diving into Nim, try building some of the Tonc 'advanced' demos to make sure your environment is good. The examples in this repo each use a nimscript configuration which should make a good starting point for any project. From within an example you can run nim build in the terminal to produce a GBA ROM. Happy coding! And if you need any help you can reach me (@exelotl) on the gbdev Discord.

tonc + libtonc by cearn
devkitARM toolchain maintained by wintermute
maxmod sound system by mukunda johnson
mGBA by endrift
posprintf by dan posluns
natu logo by hot_pengu, based on pixel art by iamrifki
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704800238.80/warc/CC-MAIN-20210126135838-20210126165838-00598.warc.gz
CC-MAIN-2021-04
1,548
28
https://codewalr.us/index.php?PHPSESSID=hko13t5bi84dgo9d75fvn32sou&topic=2251.msg60100
code
GIMP is nice, but I personally find it lacking in certain features I need. I usually use Adobe Photoshop or some online services, but it really depends on the target platform. If it's just a quick little 8-bit sprite, then GIMP is fine. If it's going to be any more complicated than that, then Photoshop is my weapon of preference.
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202303.66/warc/CC-MAIN-20190320064940-20190320090940-00030.warc.gz
CC-MAIN-2019-13
337
4
https://www.nickmccullum.com/python-gui-tkinter/
code
Creating a Python GUI using Tkinter

Tkinter is the native Python GUI framework that comes bundled with the standard Python distribution. There are numerous other Python GUI frameworks. However, Tkinter is the only one that comes bundled by default.

Tkinter has some advantages over other Python GUI frameworks. It is stable and offers cross-platform support. This enables developers to quickly develop Python applications using Tkinter that will work on Windows, macOS, and Linux. Another benefit is that the visual elements created by Tkinter are rendered using the operating system's native elements. This ensures that the application is rendered as though it belongs to the platform where it is running.

Tkinter is not without its flaws. Python GUIs built using Tkinter appear outdated in comparison to other more modern GUIs. If you're looking to build attractive applications with a modern look, then Tkinter may not be the best option for you. On the other hand, Tkinter is lightweight and simple to use. It requires no installation and is less of a headache to run compared to other GUI frameworks. These properties make Tkinter a solid option when a robust, cross-platform application is required and modern aesthetics are not a priority. Because of its low aesthetic appeal and ease of use, Tkinter is often used as a learning tool.

In this tutorial, you will learn how to build a Python GUI using the Tkinter library.

Table of Contents

You can skip to a specific section of this Python GUI tutorial using the table of contents below:
- Python GUI - A Basic Tkinter Window
- Python GUI - Tkinter Widgets
- Python GUI - Label in Tkinter
- Python GUI - Entry in Tkinter
- Python GUI - Text in Tkinter
- Python GUI - Create Buttons in Tkinter
- Python GUI - Working With Events
- Final Thoughts

By the end of this tutorial, you will have mastered Tkinter and will be able to efficiently use and position its widgets. You can test your skills by trying to build your own calculator using Tkinter. Let's get down to it and start with creating an empty window.

Python GUI - A Basic Tkinter Window

Every Tkinter application starts with a window. More broadly, every graphical user interface starts with a blank window. Windows are containers that contain all of the GUI widgets. These widgets are also known as GUI elements and include buttons, text boxes, and labels.

Creating a window is simple. Just create a Python file and copy the following code into it. The code is explained below and creates an empty Tkinter window.

import tkinter as tk

window = tk.Tk()
window.mainloop()

The first line of the code imports the tkinter module that comes integrated with the default version of a Python installation. It is convention to import Tkinter under the alias tk. In the second line, we create an instance of Tk and assign it to the variable window. If you don't include window.mainloop() at the end of the Python script, then nothing will appear. The mainloop() method starts the Tkinter event loop, which tells the application to listen for events like button clicks, key presses and the closing of windows.

Run the code and you'll get an empty window. Tkinter windows are styled differently on different operating systems; the output described here was generated on Windows 10.

It is important to note that you should not name the Python file tkinter.py, as this will clash with the Tkinter module that you are trying to import. You can read more about this issue here.
Python GUI - Tkinter Widgets

Creating an empty window is not very useful. You need widgets to add some purpose to the window. Some of the main widgets supported by Tkinter are:
- Entry: An input type that accepts a single line of text
- Text: An input type that accepts multiple lines of text
- Button: A button input that has a label and a click event
- Label: Used to display text in the window

In the upcoming sections, the functionality of each widget will be highlighted one by one. Note that these are just some of the main widgets of Tkinter. There are many more widgets that you can check out here, and some more advanced widgets here. Moving on, let's see how a label works in Tkinter.

Python GUI - Label in Tkinter

Label is one of the important widgets of Tkinter. It is used for displaying static text in your Tkinter application. The label text is uneditable and is present for display purposes only. Adding a label is pretty simple. You can see an example of how to create a Tkinter label below:

import tkinter as tk

window = tk.Tk()
lbl_label = tk.Label(text="Hello World!")
lbl_label.pack()
window.mainloop()

Running this code will display a window containing the label. For reasons that I'll explain in a moment, this output is far from ideal. Let's explain this code first. lbl_label initializes a Tkinter label variable, which is attached to the window by calling the pack() method.

You can also change the background and text color. The height and the width of the label can be adjusted as well. To change the colors and configure the height and width, simply update the code as follows:

lbl_label = tk.Label(
    text="Hello World!",
    background="green",
    foreground="red",
    width="10",
    height="10"
)

Running the code will yield a green label with red text (Fig 3: Configuring Tkinter Label Widget). You may notice that the label box is not square despite the fact that the width and height have been set equal. This is because length is measured in text units. The horizontal text unit is the width of the character 0 (the number zero) in the default system font, and similarly, the vertical text unit is the height of the character 0.

Next, let's explore how to accept user input in a Tkinter application.

Python GUI - Entry Widgets in Tkinter

The entry widget allows you to accept user input in your Tkinter application. The user input can be a name, an email address, or any other information you'd like. You can create and configure an entry widget just like a label widget, as shown in the following code:

import tkinter as tk

window = tk.Tk()
lbl_label = tk.Label(
    text="Hello World!",
    background="green",
    foreground="red",
    width="20",
    height="2"
)
ent_entry = tk.Entry(
    bg="black",
    fg="white",
    width="20",
)
lbl_label.pack()
ent_entry.pack()
window.mainloop()

Running the code will display the label with an entry box beneath it. You can read the input inserted by the user using the get() method. A practical example of this method is shown in the button section later in this Python GUI tutorial.

Python GUI - Text in Tkinter

The Tkinter entry widget is useful if you're looking for a single line of input. If a response requires multiple lines, then you can use the text widget of Tkinter. It supports multiple lines, where each line is separated by a newline character '\n'.
You can create a text widget by adding the following code block to the code shown in the entry widget section:

txt_text = tk.Text()
txt_text.pack()

Running the code after adding the block above will display a multi-line text box below the other widgets.

Python GUI - Create Buttons in Tkinter

If you want your Tkinter application to serve any meaningful purpose, you will need to add buttons that perform some operation when they are clicked. Adding a button is pretty straightforward and similar to how we added the other widgets. You can add a simple button by adding the following code block:

btn_main = tk.Button(
    master=window,
    text="Main Button"
)
btn_main.pack()

Running the code will add the button to the window. Now that there's a button, we can do some serious damage! The button generates an event that can be used for changing elements or performing other functionality.

Python GUI - Working With Events

For the purpose of this tutorial, functions will be kept simple. So, whenever the Main Button is clicked, whatever the user inputs in the entry widget will be pasted into the text and label widgets. The code is edited as follows to achieve this functionality:

import tkinter as tk

def copyText(text):
    if str(text):
        textVar.set(text)
        txt_text.insert(tk.END, text)

window = tk.Tk()

textVar = tk.StringVar()
textVar.set("Hello World!")

lbl_label = tk.Label(
    textvariable=textVar,
    background="green",
    foreground="red",
    width="30",
    height="2"
)
ent_entry = tk.Entry(
    bg="black",
    fg="white",
    width="30",
)
txt_text = tk.Text()
btn_main = tk.Button(
    master=window,
    text="Main Button",
    command=lambda: copyText(ent_entry.get())
)

lbl_label.pack()
ent_entry.pack()
txt_text.pack()
btn_main.pack()

window.mainloop()

In this code, the copyText() method has been introduced. This method copies the text of the entry widget to the label and the text widgets. To change the text of the label, we introduced a StringVar, and instead of setting the text of the label, we set its textvariable equal to textVar. In the command parameter, we set the button to call the copyText() method whenever it is clicked; the entry widget's text is passed to the method.

In the copyText() method, the first step is to check whether the entry widget contains an empty string. Python makes it simple to do this, as an empty string is considered a boolean false value in Python. After checking for the empty string, the value of the entry widget is copied to the StringVar and the text widget. The text is inserted at the end of the text widget by setting its position as tk.END. It can be set to a particular index as well by replacing it with "1.0", where '1' is the line number and '0' is the character.

Running the code and clicking the button will copy the entry text into both the label and the text box.

Working with Python is fun and simple, and it allows you to build cool applications pretty easily. Learning Tkinter allows you to build your first Python GUI. It's simple, supports cross-platform compatibility, and you can build many different applications with it. This tutorial is just a basic guide to Tkinter; there is much more available for you to learn. Learning about geometry managers should be your next step to improve your Tkinter skills (a small taste follows below). After working through this tutorial, you should have a basic understanding of Tkinter and how to use buttons to call functions, paving the way for further exploration.
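As a small taste of those geometry managers, here is a sketch using grid() in place of pack(); the two-row form layout is purely illustrative:

# grid() places widgets in rows and columns instead of stacking them
# vertically like pack() does. A tiny two-by-two form layout:
import tkinter as tk

window = tk.Tk()

tk.Label(text="Name:").grid(row=0, column=0, sticky="e")   # right-align label
tk.Entry(width=20).grid(row=0, column=1)
tk.Label(text="Email:").grid(row=1, column=0, sticky="e")
tk.Entry(width=20).grid(row=1, column=1)

window.mainloop()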
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00091.warc.gz
CC-MAIN-2023-14
10,146
101
https://worldbuilding.stackexchange.com/questions/116314/reasons-why-ai-mind-uploaded-humans-would-run-long-term-anthropological-experime
code
I’m having a bit of trouble with this one. I’m trying to justify why a once scientifically advanced human outpost on an alien planet went the way of the Planet of the Apes and became a schizophrenic science fantasy world that’s forgotten all of its history and scientific knowledge. A thought I had was that perhaps the sabotage was intentional, a deliberate act committed by AI or humans who had become digital consciousnesses uploaded to the outpost’s computer network. The motivation, as far as I could see one beyond sadism or petty spite, would be to reset the poor fleshies back to a pre-industrial state and study how and in what ways human societies might develop in unfamiliar or exotic environments with different social pressures (i.e. what if they lived in a world without access to x resource, what if they lived in an environment with deadly weather conditions or extreme geography, what if they had a caste system that was backed up by cybernetics, what if multiple self-contained societies or kingdoms developed only miles apart from each other etc.) But why do this in this specific way? Rewriting the memories of the remaining population to believe they’re living in a pre-industrial world without advanced tech, or just wiping them all out to start fresh with their kids in an environment where knowledge is more tightly controlled, is all well and good as far as methods for conducting highly unethical human experiments go, but why even use the fleshies at all? If they can copy and upload human minds or create AI with human or superhuman intelligence, why not run these experiments on simulated consciousnesses within a VR environment at a vastly accelerated timescale where you could cycle through thousands of permutations of different societies in the time it would take for a single meatspace civilization to grow and die? I like “anthropological experiment gone wrong” as an answer to the question “well why did everyone just somehow forget they had all this advanced technology”, but I’m having a hard time justifying why anyone might think to do it in this way. Can anybody help me out?
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648245.63/warc/CC-MAIN-20230602003804-20230602033804-00057.warc.gz
CC-MAIN-2023-23
2,135
3
https://github.com/JordanRickman/muse-tone
code
This is a Chrome app that sonifies data from a Muse EEG headband. We have to use a Chrome app and not a regular page in order to have UDP access to receive OSC messages from the Muse software. To use it, you will need Muse Direct installed on your computer and your headband paired over Bluetooth. You will then need to add an output to Muse Direct that streams to port 5000. The configuration should look like the below: Note the selection of "custom static text" with a blank input under prefix! It turns out that the way Muse Direct formats OSC addresses, e.g. "Person0/eeg", is NOT standard OSC, and the OSC.js library won't accept it, giving an error. OSC addresses are supposed to start with a leading slash, so by setting up the configuration this way we get addresses like /eeg, which is valid. In Chrome, turn on developer mode in the Extensions page and load this folder as an unpacked extension. Then, once you have your Muse connected to Muse Direct and streaming to port 5000 (it should look like the below screenshot), you can launch the Muse-Tone Chrome app from the chrome://apps page in your browser. Note that Chrome seems to hold its lock on port 5000 even after you close the Muse-Tone app, and the next time you open the app it will fail because it can't open the port. You will know it's failing if you hear a continuous, unchanging noise. The fix is to go into the Extensions tab and click the reload button on the Muse-Tone app. Note also that only the data from the first contact on the headband is used. I think this is the center left contact.
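A quick way to sanity-check that Muse Direct is really streaming OSC to port 5000 — before debugging the Chrome app itself — is to listen with a small script. This is a sketch, not part of this repo, and it assumes the third-party python-osc package (pip install python-osc); close the Chrome app first, since only one process can bind the port:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def print_eeg(address, *args):
    # One OSC message per sample; args holds the per-contact values.
    print(address, args)

dispatcher = Dispatcher()
dispatcher.map("/eeg", print_eeg)  # matches the blank-prefix config above

# Listen on the same UDP port Muse Direct streams to.
server = BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher)
server.serve_forever()
```

If /eeg lines print with a number per contact, the blank-prefix configuration is producing valid OSC addresses.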
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257939.82/warc/CC-MAIN-20190525084658-20190525110658-00516.warc.gz
CC-MAIN-2019-22
1,569
16
https://www.elevenwarriors.com/users/buck-guy
code
Sorry BucksHave7, but you're incorrect. If ttun loses a game before The Game, and PSU wins out, then PSU wins the tie-breaker against the Buckeyes since they won the head-to-head (remember, PSU's only other loss was a non-conference game to Pitt, which doesn't count in determining our conference champion). So, if PSU doesn't lose another game, they have the advantage over the Buckeyes regardless of how much better the Buckeyes may be. So, there are three hopes:
1. PSU gets another loss, and the Buckeyes win out; so that even if ttun is undefeated going into The Game, the Buckeyes would win that tie-breaker because of the head-to-head W.
2. Since ttun already beat PSU, and PSU beat the Buckeyes, it means that if all three make it to the final weekend, with PSU not losing and OSU beating ttun, then they all end up in a three-way tie. The tie-breakers would come down to who has the highest ranking among the three by the CFP Committee, in which case the Buckeyes would probably get the nod (but still not 100%; remember in 2014, the head-to-head winner between Baylor and TCU ended up ranked behind the team they beat, and the same could be done for ttun if they lose a squeaker at the Shoe).
3. Best case scenario is for the Buckeyes to win out, and for PSU and ttun to lose another game, or more, because... Schadenfreude
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426133.16/warc/CC-MAIN-20170726102230-20170726122230-00051.warc.gz
CC-MAIN-2017-30
1,324
4
https://bbs.espressif.com/viewtopic.php?p=4051
code
In an ISR, it is not allowed to call a function, either directly or indirectly, that resides in .irom0.text. If you are unsure where a function is located, consult the .sym file (which can be generated from the .elf file by "xtensa-lx106-elf-gcc-nm -n").

SpenZerX wrote: is it allowed to use sdk functions (os_malloc, strcpy) in interrupts (GPIO)?

In your interrupt handler, say it's a UART RX interrupt: store the received data in a buffer and signal the task queue by calling system_os_post. system_os_post will cause the task handler you registered in system_os_task to be called 'very soon', but outside the interrupt context. In the task handler you do any lengthy processing. While your task handler is running, new interrupts may occur. The interrupt handler should be kept as short as possible. Even if it were safe to call malloc, doing this in an ISR is bad practice. Instead, pre-allocate the necessary buffer space in your user_init. In a well-designed system, polling is never necessary.

It isn't. And, by the way, it isn't necessary to just blindly try something. As I mentioned, you can generate a .sym file and confirm with certainty whether a particular function (SDK or otherwise) is in .irom0.text. The .sym file excerpt below, from one of my applications, clearly shows that system_os_post is not in .irom0.text. Any function listed between _irom0_text_start and _irom0_text_end is cached. If a function is listed elsewhere, it isn't cached.

SpenZerX wrote: But it is an SDK Function that may be also cached from rom. I will give it a try.

40100220 T pvPortMalloc
401008f0 T system_get_time
4010090c T system_os_post
40211000 A _irom0_text_start
40211008 T post_init
40211034 T zb_mainTaskAddr
40211034 T zf_Main
4023b70c A _irom0_text_end
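Since the advice above is to check the .sym file rather than guess, the lookup can be made mechanical. This small helper is a sketch (not from the thread) that assumes the nm output format shown in the excerpt:

```python
# Usage: python check_sym.py app.sym system_os_post
# where app.sym comes from: xtensa-lx106-elf-gcc-nm -n app.elf > app.sym
import sys

def in_irom0_text(sym_file, name):
    start = end = addr = None
    with open(sym_file) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue
            address, symbol = parts[0], parts[2]
            if symbol == "_irom0_text_start":
                start = int(address, 16)
            elif symbol == "_irom0_text_end":
                end = int(address, 16)
            elif symbol == name:
                addr = int(address, 16)
    if None in (start, end, addr):
        raise SystemExit("symbol(s) not found in %s" % sym_file)
    return start <= addr < end

if __name__ == "__main__":
    cached = in_irom0_text(sys.argv[1], sys.argv[2])
    print("%s is %ssafe to call from an ISR" % (sys.argv[2], "NOT " if cached else ""))
```

Run against the excerpt above it would report system_os_post as safe, since 0x4010090c lies below _irom0_text_start.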
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740733.1/warc/CC-MAIN-20200815065105-20200815095105-00005.warc.gz
CC-MAIN-2020-34
2,120
27
https://community.tableau.com/thread/201928
code
Have you installed the Vertica drivers on TS too? If you are connecting to Vertica in order to render the report, the drivers and any DSNs you created need to be on the server as well; otherwise it would throw an error for you. This could be for all sorts of reasons... Let's start with the basics - version of Desktop and version of Server? Are they different? And are the latest drivers installed? Have a look at this thread > Tabcmd get pdf errors - although initially it doesn't appear related, a lot of the advice from myself and Toby will be useful for you.
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496694.82/warc/CC-MAIN-20190220210649-20190220232649-00194.warc.gz
CC-MAIN-2019-09
550
5
https://www.wizbii.com/company/amazon/job/software-development-engineer-ii-476
code
Amazon Lightsail is hiring! Have you always dreamt of building products that you would love using yourself? Are you a software development engineer that loves working in an agile, innovative, high-growth environment? Looking for an opportunity to build an exciting new AWS service? Come join the Amazon Lightsail team. Lightsail is a new AWS service that redefines the AWS experience for developers. It is the easiest way to get started with AWS, enabling developers to easily get their cloud stacks up and running in seconds. Lightsail does the hard work on behalf of customers, allowing them to focus on the things that matter most to them: their code, their content, their business.
· Are a motivated team of passionate technologists
· Work in a fast-paced, high-growth environment
· Are laser focused on optimizing customer experience
· Love to ship elegant products that solve complex problems
· Are fascinated by the limitless opportunities of the cloud
· Want to create massive-scale web services
· Love seeing the positive impact of your work on real customers
· Are passionate about solving challenging problems using the latest tech
· Learn from others and help grow those in your team
· Thrive in a start-up, innovative environment
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161098.75/warc/CC-MAIN-20180925044032-20180925064432-00172.warc.gz
CC-MAIN-2018-39
1,252
12
https://gbatemp.net/threads/trying-to-upgrade-from-gen-b-to-d3.271114/
code
Okay, I've tried everything so far, and every time I hit the install CFW button on the D3 installer, the PSP just crashes and I have to restart it, with no change in the firmware at all. I have myself a PSP-1000. Anyone know how to fix this? Then again, if all else fails, I think I'll just go back to M33. Is it even possible to go back from 5.50 to 5.00 this way, though? Also, does anyone know if gpSP (the latest version) will run on M33 and/or GEN-D3?
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592636.68/warc/CC-MAIN-20180721145209-20180721165209-00072.warc.gz
CC-MAIN-2018-30
456
1
https://aaroncake.net/forum/post.asp?method=TopicQuote&TOPIC_ID=10596&FORUM_ID=34
code
TOPIC REVIEW
Posted - Jun 26 2017 : 11:06:40 AM
I have three 10W white LEDs, one 3W red and another 3W blue. I want to light them up to their full potential. What is the best way to do that? I just need to turn them on; I don't want to control their brightness. I can't see any circuit to do that. Can I connect them straight to 12V DC in parallel, or do I need to add any resistors?
1 LATEST REPLY (Newest First)
Posted - Jun 27 2017 : 09:33:33 AM
You just need to connect to your power source (12V) with series resistors in line with each LED. Connect each resistor/LED pair in parallel. You'll need to know the voltage of each LED and the current limit.
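To make the reply concrete, here is a worked resistor calculation using R = (V_supply − V_forward) / I. The forward voltages and currents below are placeholder guesses for illustration only — read the real values off your LEDs' datasheets:

```python
V_SUPPLY = 12.0  # volts

def series_resistor(v_forward, current):
    """Return (ohms, watts dissipated in the resistor)."""
    drop = V_SUPPLY - v_forward
    return drop / current, drop * current

# Hypothetical datasheet values, NOT measured ones.
leds = [("10W white", 9.0, 1.0), ("3W red", 2.2, 0.7), ("3W blue", 3.4, 0.7)]
for name, vf, i in leds:
    r, p = series_resistor(vf, i)
    print(f"{name}: R = {r:.1f} ohm, resistor dissipates {p:.1f} W")
```

Note how much power the resistors burn at these currents; for high-power LEDs a constant-current driver is often the more practical choice, but the resistor approach works exactly as described in the reply.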
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107904039.84/warc/CC-MAIN-20201029095029-20201029125029-00167.warc.gz
CC-MAIN-2020-45
770
6
http://www.technologyevaluation.com/search/for/eam-jd-edwards.html
code
In a nutshell, J.D. Edwards seems poised to deliver applications within its traditional verticals that are wide-ranging, integrated, and modular (loosely decoupled) at the same time, which is apparently a clearer message and a better business model for the company. With a new management team the company seems to have found its soul, as it has finally pinpointed the right offering for its target market (in terms of geography, customer size, and vertical segments), and it also seems to be exuding an air of confidence without arrogance, which had rarely, if ever, been seen in the past. It has made significant forays with its EAM solution, which has been re-architected as a stand-alone product in addition to a native integration with its ERP system, and which offers solid functionality including predictive maintenance analysis based on the application of analytics to historical maintenance records, criticality analysis, and warranty management, with service agreement management slated for a future release. Another promising revenue driver (at least based on existing install base's low penetration and increasing i
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246656887.93/warc/CC-MAIN-20150417045736-00122-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
1,113
3
https://danielheth.com/2011/12/19/free-onscreen-ruler/
code
On occasion I’ve needed the ability to keep track of my location on the screen. Since I have multiple screens, that typically involved me putting my finger on one screen while clicking or typing on another screen. A long time ago I had a small utility which put a ruler on the screen which I could move around with my mouse. This was a fantastic utility and very useful. Today I went looking and found someone had written a different program which does what I wanted even better. A Ruler for Windows was written by Rob Latour and is available at http://www.arulerforwindows.com/ Here’s my step-by-step installation guide for this utility: The installation was so fast, I wasn’t able to get a progress screen… LOL Once finished, I launched the application and saw this. Now I can have my bank website up on one screen and my Quicken money software up on the other screen… and quickly/easily reconcile my accounts. Thanks Rob! If you have any questions or comments, leave them below!
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948858.7/warc/CC-MAIN-20230328104523-20230328134523-00520.warc.gz
CC-MAIN-2023-14
987
6
https://nnc3.com/mags/LJArchive2017/LJ/117/6673.html
code
A new open-source application lets professional and amateur astronomers explore space from their desktops. We've been fascinated by astronomy since the ancient Chinese first charted the skies. The computerization of astronomy, some would argue, is its greatest leap forward yet. Today, unattended robotic telescopes scan skies that have been charted over centuries, recording their findings in modern databases. CCD cameras capture images impossible to define on film. It's an exciting time to be an astronomer, whether amateur or professional. The revolution in astronomy doesn't stop at the hardware. Research-grade telescopes in observatories from Spain to Korea are under the control of open-source software and Linux-based computers. Under the open-source model, scientists are free to modify the control software, creating a trickle-down effect that benefits amateurs. Open source and Linux even have changed the scientific method. With source code freely available, peer review now occurs not only on the data, but on the data gathering methods as well. At the forefront of this open-source astronomy revolution is Talon. Talon was originally developed by Ellwood Downey as the Observatory Control and Astronomical Analysis Software (OCAAS). In 2001, the software was purchased by Torus Technologies of Iowa City, Iowa. In late 2002, Torus was purchased by Optical Mechanics, Inc., and the updated OCAAS package was released as Talon under the GPL. During the past two years, I've had the daily pleasure of working with Talon. I've installed and configured the software on multiple telescope packages, and I've followed these telescopes to destinations around the world for installation and on-site configuration. It's my pleasure to share with you some of the broad points of Talon installation, configuration and use. Talon can be downloaded at observatory.sourceforge.net. The software interacts with integrated motion control boards, available from Optical Mechanics, Inc. (Optical Mechanics Motion Controllers) or Oregon Microsystems (PC39 Motion Controllers). Object acquisition and tracking, scheduled operations, environmental monitoring, dome control, image analysis and processing all fall under the control of Talon. Networked operations also are possible using a remote X session. The Talon package contains a full installation script; install.sh creates a talon user, compiles the binaries and creates a set of text configuration files for initial operation of the telescope. Talon contains a full complement of astronomy applications designed specifically for use as a suite of tools. The main Talon interface utilizes the Motif toolset, producing a familiar and unified look and feel throughout the application set. Although the toolset is rich, the following four tools should be of use to most observers. xobs is the main Talon control window and is launched with the terminal command startTel. It contains all the monitoring and calibration tools necessary for operation. This window provides manual control of the telescope and any attached peripherals, such as a filter wheel or dome control. It also provides a constant display of the current position of the telescope, as calculated by feedback from the motor encoders. This feedback is provided in a set of text boxes within the xobs window. When using Talon, the first important task is to find the home position of the various encoders located throughout the system.
These encoders close the loop on the operation of the axes, providing a static count for the full travel of each axis. Movement of the telescope is calculated in part by the motion of the chosen axis in relation to the zero position on the encoder. Decrementing the Declination encoder, for example, generally moves the telescope to the north. The operation to find homes in the xobs window hunts for and establishes the zero positions on each axis encoder. Using the software paddle command in the xobs window (Figure 2), the user can position the telescope, filter wheel and focus position manually. The motion of the telescope to the east and west is referred to as the right ascension (RA) or hour angle (HA) of the telescope. To travel north and south is referred to as Declination (Dec). Using positive and negative encoder counts, moving the telescope axes is a simple matter of moving the axis positive or negative x (RA) or y (Dec). These coordinates are in relation to the North Pole. Additionally, Talon provides data on the weather conditions at the observing site with an attached Davis weather station. This feature ensures that the telescope is not exposed to adverse weather conditions during unattended operations. When conditions fall within a predetermined range, the observatory dome or roll-off roof closes, the telescope moves to a stowed position and operations cease. As with the position data, this information is provided in text boxes within the xobs interface. Finally, xobs provides a search function that allows the user to enter the name of a celestial object, search an internal database and automatically slew the telescope into position to observe and photograph the requested object. telsched is the element of Talon that makes robotic unattended observing sessions possible. This can be a critical function for institutions conducting research from remote locations or those requiring repeated observations of particular objects over a given period of time. The telsched command opens a scheduler for these unattended observing sessions. The scheduler automatically calculates images to be taken during the session based on the size (in degrees) of the chunk of sky the user selects. In general, the tighter the area of the sky (fewer degrees), the more images taken. Images taken by telsched during an unattended session are stored in a directory of the user's choice. All instructions created by the telsched program are stored in a flat file. These instructions are referenced by xobs when the telescope is slaved off to robotic control from the xobs interface. Camera is another terminal-launched application in the Talon suite. It provides complete control over the functions of a CCD camera attached to the telescope. The camera application includes tools for exposure time, image size, software image filtering and image analysis. Camera also contains tools for adjusting the brightness and contrast of images, determining the area of interest (AOI) of the image and automatically labeling objects by comparison to the World Coordinates System (WCS). The latter tool is, in fact, a pattern-matching algorithm that allows the system to compare known patterns of objects to the WCS database. xephem provides a software ephemeris, or sky charting interface, for the rest of the Talon suite. As with other ephemerides, it relies heavily on correct geographical and time coordinates; this information can be configured manually by the user. 
xephem also can be configured to poll an attached GPS at regular intervals, adjusting the system time to account for internal clock drift. The xephem program, launched from the command line with xephem, provides a granular view of the current sky. Data on each object is provided in a right-click pop-up screen. The user also can point the telescope using this pop-up, a feature used extensively for calibration. Magnification can be increased, effectively looking deeper and deeper into the sky. As an alternative to zooming, the user may select a minimum magnitude (apparent brightness) threshold. This allows brighter stars to be filtered in the ephemeris view, leaving only the dimmer objects in the window. The sky view also may be rotated, and object type filtering is provided. For example, globular clusters can be selected, eliminating the view of all other object types. Configuration files are critical to the operation of Talon. They provide the means by which the software communicates with both the user and the hardware installed in the telescope. In the Linux tradition, these files are simple text files, commented heavily for the clarification of the user. The configuration files for all elements of Talon can be found in /usr/local/telescope/archive/config. Using the default tcsh shell, the simple command cd config moves the user into the configuration directory. The operation of the telescope can be viewed as two discrete elements, each of which is addressed by a specific configuration file type. First, the internal motion control boards must communicate with the motors and encoders. Configuration files intended to serve this function utilize a .cmc extension. I've always viewed this extension as delineating files that configure motion controllers, cmc for short. The .cmc files establish the operating parameters for the controller boards, which, in turn, send signals to and receive feedback from the encoders and electromechanical components. The other element of telescope operation is the interface between the user and the software. In simple terms, all user-controlled operations utilize configuration files with a more traditional .cfg extension. Whereas the .cmc files operate behind the scenes to communicate directly with the hardware, the user interface must communicate with the .cfg files. Although every configuration file plays a role in the operation of Talon, some in both the .cmc class and the .cfg class bear special attention. These .cmc files include: basic.cmc: establishes the basic communication between the motion control boards and the motors driving the telescope axes. find.cmc: establishes the routines for finding objects based on encoder counts. nodeDec.cmc: establishes the hardware parameters for the Dec axis of the telescope. nodeRA.cmc: establishes the hardware parameters for the RA axis of the telescope. nodeFocus.cmc: establishes the hardware parameters for the telescope focus control. The .cfg files are: boot.cfg: allows the user to script Talon startup routines. These may include starting GPS monitoring, weather station monitoring and opening the Talon main interface when the computer boots. home.cfg: provides an initial set of constants to allow the telescope to find the home position of each encoder. These constants represent a spatial sense for the telescope prior to working through the initial calibration routines. Once these routines are completed, the actual encoder counts and axis travel are updated automatically. 
telescoped.cfg: provides constants regarding the telescope axes, establishes the position of physical travel limit switches in relation to the encoders and establishes the maximum rotational velocity of each axis as well as the rotational acceleration rates. The settings in each of the individual .cmc and .cfg files utilize a naming convention that makes their function easily recognizable, but some critical settings within these files deserve special attention. These settings can be modified with any familiar text editor: boot.cfg: establishes the overall parameters of the Talon software at boot. setTelUser: creates the telescope user, the telescope user group and sets the appropriate permissions. By default, the initial telescope user and group are named talon. This can be changed for subsequent use by modifying the setTelUser constant in boot.cfg, provided the new user and group already exist on the system. setTelDaemons: initializes the telescope dæmon (telescoped), camera dæmon (camerad), weather station dæmon (wxd) and global positioning system dæmon (gpsd). home.cfg: provides the following four constants for encoder counts, home position, limit switches and rotational velocity and acceleration: HSTEP: the number of encoder counts in the full rotation of the HA axis encoder. DSTEP: the number of encoder counts in the full rotation of the Dec axis encoder. HSIGN: the physical location of the HA encoder on the telescope. When viewed from the north, the HA encoder will increment clockwise if placed at the back of the polar shaft (the shaft upon which the telescope moves from east to west) or decrement when placed at the front. Another way to view this is, if the marked encoder surface points to the south in the final telescope configuration, it will increment when rotating clockwise. If it points to the north, the encoder will decrement with clockwise rotation. This configuration is a simple constant: 1 if the encoder increments, -1 if it decrements. DSIGN: the physical location of the Dec encoder on the telescope. Much like the HA encoder, the increment/decrement of the encoder varies depending on the method used to mount the encoder. If the encoder is installed with the encoded surface toward the outside of the fork, it decrements when rotated clockwise, or toward the north. This requires a setting of 1. If the encoder is mounted with the encoded surface to the inside of the fork, it increments when rotated clockwise. This requires a setting of -1. telescoped.cfg: provides the following constants for initial operation: HAXIS: the telescope network node from which the HA axis is controlled. DAXIS: the telescope network node from which the Dec axis is controlled. HESTEP: the raw encoder counts per revolution for the HA axis. DESTEP: the raw encoder counts per revolution for the Dec axis. HMAXVEL: the maximum slewing velocity of the HA axis. DMAXVEL: the maximum slewing velocity of the Dec axis. HMAXACC: the maximum slewing acceleration of the HA axis. DMAXACC: the maximum slewing acceleration of the Dec axis. Putting Talon to use requires a few initial calibration items. As with any telescope, you'll need to check and adjust the polar alignment—the physical location of the telescope in relation to celestial north. With xobs in the boot.cfg script, the main Talon screen should open right after your desktop loads. From this main screen, select Find Homes (Figure 6). As noted, this routine finds the home mark on each encoder, RA, HA and Focus. From the pop-up window, select All. 
The telescope should move in all axes. Each axis skips past the home mark initially, backing up incrementally until it finds the mark again. This reduction of each move to the home mark ensures that the telescope ends up precisely on the mark. The next step is to find limits. This routine locates the telescope's physical limit switches, which prevent the telescope from damaging itself by swinging too far through the travel of each axis. When the switches at both ends of travel are found in an axis, the software writes the location (in encoder counts) to the home.cfg file. You should need to complete the find limits routine only one time. With the telescope calibrated and aligned, it's time to take some pictures. Open the camera and xephem applications from the command line. Enable telescope control in the xephem options. Select an object by right-clicking in the ephemeris, then select Point Telescope from the resulting pop-up. The telescope should slew to the new position. Click on the camera application and select Take One. With the proper connection to the camera, you'll hear the shutter trip. Within a few seconds, an image of the selected object renders on your screen. To set up scheduled operations, use the telsched command. In the resulting window, select the size of the mesh, remembering that the tighter the mesh is, the more images taken. Set the time for the operations to start (in UT) and save the schedule file in the default directory. Then pop back out to the main Talon screen and select batch mode. You'll receive a confirmation window. When you select Yes, the Talon application slaves to auto mode. You can cancel auto mode from the main screen. You cannot, however, operate the telescope manually from the main screen while batch mode is in use. The Talon program is rich with features, providing complete control over the operation of the telescope. Many of the finer features are outlined in the .pdf manual provided in the Talon .tgz file. It's worth a thorough read to understand the telescope interfaces to the control boards and to the software. The manual also contains in-depth information on image processing, solving for WCS solutions, automated and remote operations and finer calibration items that are beyond the scope of this article. Talon represents a complete leap into the open-source world for astronomers, both professional and amateur. With the robust and networkable nature of Linux, Talon provides a stable platform from which we can do what we've been doing since the beginning of time—viewing, recording and discovering the heavens.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573623.4/warc/CC-MAIN-20220819035957-20220819065957-00718.warc.gz
CC-MAIN-2022-33
16,584
57
https://www.rubyplus.net/2016/09/corey-haines-four-rules-of-simple.html
code
The example that shows Duplication of Knowledge is the same as preferring domain-specific types over primitive types (domain-driven design). Behavior Attractor is the same as cohesion: keep the related data and behavior in the same class. "Test names should influence the object's API" is the same as saying tests should focus on intent, not implementation. There is a relationship between the test name or doc string and the test. Don't have tests that depend on previous tests. This is subtle and something new that I learned.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585516.51/warc/CC-MAIN-20211022145907-20211022175907-00431.warc.gz
CC-MAIN-2021-43
510
4
https://www.wrh.ox.ac.uk/publications/517404
code
Physical activity and healthy weight maintenance from childhood to adulthood. Cleland VJ., Dwyer T., Venn AJ. The objective of this study was to determine whether change in physical activity was associated with maintaining a healthy weight from childhood to adulthood. This prospective cohort study examined 1,594 young Australian adults (48.9% female) aged 27-36 years who were first examined at age 9-15 years as part of a national health and fitness survey. BMI was calculated from measured height and weight, and physical activity was self-reported at both time points; pedometers were also used at follow-up. Change in physical activity was characterized by calculating the difference between baseline and follow-up z-scores. Change scores were categorized as decreasing (large, moderate), stable, or increasing (large, moderate). Healthy weight was defined in childhood as a BMI less than international overweight cutoff points, and in adulthood as BMI<25 kg/m(2). Healthy weight maintainers were healthy weight at both time points. Compared with those who demonstrated large relative decreases in physical activity, females in all other groups were 25-37% more likely to be healthy weight maintainers, although associations differed according to the physical activity measure used at follow-up and few reached statistical significance. Although younger males whose relative physical activity moderately or largely increased were 27-34% more likely to be healthy weight maintainers than those whose relative physical activity largely decreased, differences were not statistically significant. In conclusion, relatively increasing and stable physical activity from childhood to adulthood was only weakly associated with healthy weight maintenance. Examining personal, social, and environmental factors associated with healthy weight maintenance will be an important next step in understanding why some groups avoid becoming overweight.
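To illustrate the change-score method described in the abstract, here is a minimal sketch with invented numbers (the study's actual data are not reproduced here):

```python
import numpy as np

# Hypothetical activity levels (e.g., hours/week) for five participants.
baseline = np.array([2.0, 5.5, 8.0, 3.5, 6.0])   # at age 9-15
followup = np.array([1.0, 6.0, 7.5, 2.0, 9.0])   # at age 27-36

# Standardize each time point, then difference the z-scores.
z_change = ((followup - followup.mean()) / followup.std()
            - (baseline - baseline.mean()) / baseline.std())
print(z_change.round(2))  # negative = relative decrease, positive = relative increase
```

Participants would then be binned on z_change into the large/moderate decrease, stable, and increase categories used in the analysis.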
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578678807.71/warc/CC-MAIN-20190425014322-20190425040322-00299.warc.gz
CC-MAIN-2019-18
1,940
3
https://www.curseforge.com/minecraft/mc-mods/target-dummy
code
It's hard to know how much damage you're doing when you one shot everything! Thankfully, Target Dummy is here to save the day. Target Dummy adds in an entity that will read out your damage output. Target Dummy was made for 1.14 Fabric and requires the Fabric Loader + Fabric API. To start, you're going to need to craft a Target Dummy item: Right click on a block to spawn in the Target dummy entity. Attack it with any weapon and your damage will be displayed above the target's head. To retrieve the target, shift + right click on it.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662625600.87/warc/CC-MAIN-20220526193923-20220526223923-00085.warc.gz
CC-MAIN-2022-21
536
4
http://www.edugeek.net/forums/virtual-learning-platforms/print-51195-moodle-fatal-error.html
code
I'm having problems with an installation of Moodle (created from a backup of the database and moodledata). When any user logs on, they are met with a blank page. Nothing at all - page source is also empty. If they then use the browser 'back' button and click on the 'Portal' crumb on the top-left they are in and Moodle works fine. I've turned the debugging on, and there are several errors, with this bringing up the rear: Fatal error: Cannot access empty property in .../moodle/enrol/database/enrol.php on line 42 Can anyone suggest a solution? There are several pages on the Interweb referring to this but no solution that I can find.
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447547904.83/warc/CC-MAIN-20141224185907-00022-ip-10-231-17-201.ec2.internal.warc.gz
CC-MAIN-2014-52
637
5
http://www.horizonsunlimited.com/hubb/welcome-to-hu/hello-new-member-from-uk-42809
code
Hello - New Member From UK My name is Maz & I have just signed up. Im from the UK and on my searching on google have come across this website. Been reading up on many past articles and its great, as it has answered alot of my questions. Basically I am director and owner of a Performance car tuning company here. A regular part of my business is exporting Japanese performance cars to Europe i.e. Denmark, Sweden, Cyprus and so on. I am interested in meeting/making some contacts over in Africa as I am looking to do a bit of business there. Hopefully I will be able to make some contacts through this website and we can talk some more. Other than that, its a great forum, and look forward to spending a bit more time on here! maz.karim AT ntlworld DOT com Last edited by mk47; 18 May 2009 at 18:03.
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637903638.13/warc/CC-MAIN-20141030025823-00086-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
799
7
https://sourceforge.net/directory/development/build/natlanguage:english/license:artistic/
code
GNU make compatible but reliable and simpler build tool Makepp, a build program which has a number of features that allow for reliable builds and simpler build files, is a drop-in replacement for GNU make. It supports almost all of the syntax that GNU make supports, and can be used with makefiles produced by utilities such as automake. It is called makepp (or make++) because it was designed with special support for C++, which has since been extended to other languages like Swig or embedded SQL. Some features that makepp adds to make are: greatly improved handling of builds that involve multiple makefiles (recursive make is no longer necessary); automatic scanning for include files; rebuild triggered if build command changes; checksum-based signature methods for reliable builds, smart enough to ignore whitespace or comment changes; extensibility through Perl programming (within your makefile); repositories (automatically importing files from another tree); build caches (not recompiling identically what a user of the same cache already did Overview of pre-defined compiler macros for standards, compilers, operating systems, and hardware architectures. k2development consists of Assembler,Linker and other necessary Tools to build 6502 Assembly Language Programs. Quilt is a Java software development tool which measures coverage, the extent to which testing exercises the software under test. It works very well with Ant and JUnit but may also be used in conjunction with other products. The script helps commiting changes on several branches in CVS by creating a bunch of CVS commands which the user normally would have to create himself. The intentions of this project are as follows: 1; Determine the potential interest level in developing a basic interface for creating models in GAMS (General Algebraic Modeling System). 2; To create a sounding board for what should be added to the exist Abuild is a scalable build system that applies Object-Oriented principles to the build problem. It is powerful and flexible and helps ensure build integrity while simplifying the user's view of the build. A thorough user's manual is included. Allows to build libs/apps from scratch, downloading source; allows to install at non-default non-central locations. A useful collection of batch files and scripts for daily purposes (backup, security, system management, etc.) and development DBIx::Connect is a Perl module module which facilitates configuration and creation of Perl DBI , DBIx::AnyDBD , or Alzabo -style database connections via configuration files and/or command-line arguments. This project originally was about a megaman clone called "Game Developer Man". Game Developer Man, the game, has long since tanked. However this project has moved on to the KRGP. The Pdoc library can be used to easily write automated documentation tools. The library features tools to extract/decompose text such as source code (Perl for now) and tools to create/compose formatted text such as HTML format. Survey coffeeBeans are Swing based JavaBeans for easy component based MVC application development Veto is a test management tool that allows you to run the relevant tests, all the relevant tests, and nothing but the relevant tests. VirtualMock is a Unit Testing tool. It uses Junit, Aspect-Oriented Programming (AOP) and the Mock Objects testing approach. Through AOP, it supports features which are not possible with other pure-java mock object frameworks. it's a replacement for GNU(Posix?) 
systems — of the well-known GNU autoconf. It's also the French term for "sweet"; like it, a small, good-looking, tasteful thing that makes developers happy ^_^. It's a set of scripts to run to check particular req A collection of open source projects produced by i3SP Pty Ltd. i3sp-build: Jakarta Ant tasks for project and dependency management, JDEE integration. A Makefile skeleton that includes configuring. Fed up with the autoconf tools? The pConf system is a small, clean make file skeleton that can do configuring tasks and works for different OS environments. Supports C, C++. Documentation (s)POD. pmtools-perl6 is a port of Perl5 Module Tools (pmtools) to Perl6. RPM packages of CPAN Perl Modules snide (Simple Nedit IDE) is a lightweight IDE for Linux. Object-Oriented C library for rapid application development Volare is a robust, cross-platform, and extensible infrastructure for automating builds. It extracts source, writes log files, publishes binaries, and reports build results; you implement build-specific tasks as well-defined callbacks in a Perl script.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125949036.99/warc/CC-MAIN-20180427041028-20180427061028-00186.warc.gz
CC-MAIN-2018-17
4,576
24
https://docs.oracle.com/cd/E19205-01/819-5265/6n7c29ds8/index.html
code
(SPARC) Specify the -xjobs option to set how many processes the compiler creates to complete its work. This option can reduce the build time on a multi-cpu machine. Currently, -xjobs works only with the -xipo option. When you specify -xjobs=n, the interprocedural optimizer uses n as the maximum number of code generator instances it can invoke to compile different files. Generally, a safe value for n is 1.5 multiplied by the number of available processors. Using a value that is many times the number of available processors can degrade performance because of context switching overheads among spawned jobs. Also, using a very high number can exhaust the limits of system resources such as swap space. You must always specify -xjobs with a value. Otherwise an error diagnostic is issued and compilation aborts. Multiple instances of -xjobs on the command line override each other until the right-most instance is reached. The following example compiles more quickly on a system with two processors than the same command without the -xjobs option. example% cc -xipo -xO4 -xjobs=3 t1.c t2.c t3.c It is illegal to specify -xipo_archive without a flag.
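Applying the documented rule of thumb (1.5 × the number of available processors) can be scripted; this is a convenience sketch, not part of the compiler documentation:

```python
# Print a cc invocation with a suggested -xjobs value.
import multiprocessing

n = max(1, round(multiprocessing.cpu_count() * 1.5))
print(f"cc -xipo -xO4 -xjobs={n} t1.c t2.c t3.c")
```

On the two-processor system from the example this suggests -xjobs=3, matching the command line shown above.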
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647838.64/warc/CC-MAIN-20180322092712-20180322112712-00640.warc.gz
CC-MAIN-2018-13
1,151
7
http://www.wftpserver.com/bbs/viewtopic.php?f=2&t=523&p=1733
code
juberish wrote: I'm playing with the FTP and just realized that there's a tool for providing direct links for file downloads. I'm concerned about the security implications of this; we have a credential-based system and SSL certs - is this all for naught since the files are apparently available to the outside via direct link anyway?? I was expecting the link to prompt you for a username and password, but this is not the case: clicking the link will directly prompt you to download the file. If anyone has any information or advice regarding the security implications of this feature, it would be appreciated.
FTP wrote: Do you mean the HTTP download link? The HTTP download link is designed for downloading by an external download tool (like FlashGet), not for file sharing. So that link is available while the user is logged in; when he logs out, it won't be available any more.
Ataman wrote: Can I use the direct link (to download a file) without logging in to the system? If not, is there an option to download a file without a user and pass? Or, like in FlashGet, to enter user & pass for every download (like Rapidshare)?
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123172.42/warc/CC-MAIN-20170423031203-00533-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,191
9
https://argentolee.wordpress.com/about/
code
An ordinary IT guy who loves automation. I like to dive into technical knowledge, learning how things work behind the scenes. This has helped me a lot in troubleshooting issues. I have a Bachelor of Computing and am a hobbyist programmer. Currently playing around with the following gadgets:
- RaspberryPi 1 & 2
- Atlas Wearable
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592150.47/warc/CC-MAIN-20180721012433-20180721032433-00561.warc.gz
CC-MAIN-2018-30
325
5
https://reddium.vercel.app/r/Republican_misdeeds/comments/xt0r16/depressed_by_russias_military_failures/iqnfyla
code
Depressed by Russia's military failures, Kremlin-controlled state TV beamed in Scott Ritter to tell them how great the Russian military is doing and how much Americans respect Russia.
Pedophile traitor. Get it right
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710711.7/warc/CC-MAIN-20221129200438-20221129230438-00553.warc.gz
CC-MAIN-2022-49
262
5
http://www.win-vector.com/blog/category/administrativia/
code
In R the [[ ]] is the operator that (when supplied a simple scalar argument) pulls a single element out of lists (and the [ ] operator pulls out sub-lists). For vectors, [[ ]] and [ ] appear to be synonyms (modulo the issue of names). However, for a vector, [[ ]] checks that the indexing argument is a scalar, so if you intend to retrieve one element this is a good way of getting an extra check and documenting intent. Also, when writing reusable code you may not always be sure if your code is going to be applied to a vector or list in the future. It is safer to get into the habit of always using [[ ]] when you intend to retrieve a single element. Example with lists: #> "a" #> "a" Example with vectors: #> "a" #> "a" The idea is: in situations where both [ ] and [[ ]] apply, we rarely see [[ ]] being the worse choice. Note on this article series. This R tips series is a set of short, simple notes on R best practices and additional packaged tools. The intent is to show both how to perform common tasks and how to avoid common pitfalls. I hope to share about 20 of these, about every other day, to learn from the community which issues resonate and to also introduce some features from some of our packages. It is an opinionated series and will sometimes touch on coding style, and also try to showcase appropriate Win-Vector LLC R tools. Dr. Nina Zumel will be presenting “Myths of Data Science: Things you Should and Should Not Believe”, Sunday, October 29, 2017 10:00 AM to 12:30 PM at the She Talks Data Meetup (Bay Area). ODSC West 2017 is soon. It is our favorite conference and we will be giving both a workshop and a talk. Thursday Nov 2 2017, “Modeling big data with R, Sparklyr, and Apache Spark”, Workshop/Training intermediate, 4 hours, by Dr. John Mount (link). Friday Nov 3 2017, “Myths of Data Science: Things you Should and Should Not Believe”, Data Science lecture beginner/intermediate, 45 minutes, by Dr. Nina Zumel (link, length, abstract, and title to be corrected). We really hope you can make these talks. On the “R for big data” front we have some big news: the replyr package now implements pivot/un-pivot (or what tidyr calls spread/gather) for big data (databases and Sparklyr). This data shaping ability adds a lot of user power. We call the theory “coordinatized data” and the work practice “fluid data”.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816912.94/warc/CC-MAIN-20180225190023-20180225210023-00644.warc.gz
CC-MAIN-2018-09
2,360
26
https://halfelf.org/tag/development/page/5/
code
When I was at WordCamp Tokyo, I was reminded of the power of a thank you and how it makes Open Source better. Is Bootstrap making the biggest mistake by upgrading a component when they don’t need the benefits? Replacing a variety of templates with cleverly built layout files. Figuring out collections with Jekyll is kind of like Custom Post Types. It’s back to the future when you make too many commits in git! Who are you writing your code for? How will they use it? Do they need simple or complex? Messing with static locations for static things on a website. After five months of CloudFlare, I’ve turned it off. Unsatisfied. Making things more secure for all things. Especially if you know it’s bad. If you know it’s bad and you publish it, you’re reckless. When we fail, we learn. But we can do more than just learn how to code better when we write bad code. We can learn to be better people. Automation is king. So is content of course, but automating the code behind the content will save you time, money, and headaches.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330962.67/warc/CC-MAIN-20190826022215-20190826044215-00100.warc.gz
CC-MAIN-2019-35
1,039
12
http://freecode.com/tags/slovak?page=1&sort=vitality&with=447&without=
code
Gammu (formerly known as MyGnokii2) is a cellular manager for various mobile phones/modems. It supports a wide variety of Nokia, Symbian, and AT devices (Siemens, Alcatel, Falcom, WaveCom, IPAQ, Samsung, SE, and others) over cables, infrared, or BlueTooth. It contains libraries with functions for ringtones, phonebook, SMS, logos, WAP, date/time, alarm, calls, and more (used by external applications like Wammu). It also includes a command line utility that can do many things (including backups) and an SMS gateway with full MySQL and PostgreSQL support from the PHP interface. Simple Groupware is a complete enterprise application offering email, calendaring, contacts, tasks, document management, synchronization with cell phones and Outlook, full-text search, and much more. Simple Groupware combines standards like RSS, iCalendar, vCard, IMAP, POP3, SMTP, CIFS, CSV, WebDAV, LDAP, and SyncML under one platform. Unlike other groupware software, Simple Groupware contains the programming language sgsML to enable the quick customization and creation of powerful Web applications. HTTrack is an easy-to-use offline browser utility. It allows you to download a Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the mirrored Web site in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads. WebHTTrack is a Web-based GUI for HTTrack. Free-SA is a tool for statistical analysis of daemons' log files, similar to SARG. Its main advantages over SARG are much better speed (7x-20x), more support for reports, and W3C compliance of generated HTML/CSS reports. It can be used to help control traffic usage, to control Internet access security policies, to investigate security incidents, to evaluate server efficiency, and to detect troubles with configuration. Zim is a graphical text editor used to maintain a collection of wiki pages. Each page can contain links to other pages, simple formatting, and images. Pages are stored in a folder structure, like in an outliner, and can have attachments. Creating a new page is as easy as linking to a nonexistent page. All data is stored in plain text files with wiki formatting. Various plugins provide additional functionality, like a task list manager, an equation editor, a tray icon, and support for version control. Zim can be used to keep an archive of notes, take notes during meetings or lectures, organize task lists, draft blog entries and email, or do brainstorming. Passwd_exp notifies users via email of upcoming password or account expiration. Its simple modular architecture allows you to perform expiration checks on any data source you use (SQL databases, LDAP...), and send expiration warnings only to desired users or groups and on selected days only. Administrators can use it to review expired accounts in the system. Support for Linux and Solaris shadow (including LDAP and NIS systems) and BSD passwd systems is included. Tux Paint is a simple and entertaining drawing program geared towards young children. It has a simple interface, sound effects, and a cartoon character (Tux, the Linux penguin). Along with drawing brush strokes, lines and shapes, you can also enter text and place "rubber stamp" (or "sticker") images on the picture.
Tux Paint is extensible, and could be useful in an educational environment (such as a grammar, elementary, or grade school). It's portable across numerous platforms, and runs well even on slower systems like the Pentium 133MHz. TYPOlight is a content management system (CMS) for people who want a professional Internet presence that is easy to maintain. The state-of-the-art structure of the system offers a high security standard and allows you to develop search engine friendly Web sites that are also accessible for people with disabilities. Furthermore, the system can be expanded flexibly and inexpensively. It features easy management of user rights, a Live Update Service, a modern CSS framework, and many already integrated modules (news, calendar, forms, etc.).
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00396-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
4,290
8
https://bunifuframework.com/topic/trial-version-crash-vs2010/
code
I am evaluating the library in Visual Studio 2010. I downloaded the library and activated the trial period correctly. The first day, I was able to create a project and “use” any of the Bunifu components. Since the second day, when I work on this project, Visual Studio crashes once I place a component integrated in Bunifu_UI_v1.5.3.dll. With all the other components which use a specific dll (button, Textbox…), it works fine. What can I do?
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00528.warc.gz
CC-MAIN-2020-24
515
7
https://techutils.in/blog/2017/05/19/stackbounty-magento2-api-integration-mule-magento2-with-mule/
code
I would like to ask you about integrating Magento2 with Mule. Have you done something like that? How did you do the integration? By using Magento's default APIs or a custom one? Have you built some base-schema connector for projects? Feel free to write about your experience (good and bad) with that.
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804125.49/warc/CC-MAIN-20171118002717-20171118022717-00111.warc.gz
CC-MAIN-2017-47
287
5
http://www.overclock.net/t/201482/slow-computer-in-odd-ways
code
Well, my computer in general is fast. However, on very basic things it seems to be pretty slow: when dragging windows, the colors trail behind; the start menu might lag a little; and sometimes a new window is slow to load. Very odd with all things considered; does anyone know what could cause the sluggishness in those particular places? Games run well, but Windows itself seems slow.
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685129.23/warc/CC-MAIN-20170919112242-20170919132242-00425.warc.gz
CC-MAIN-2017-39
410
3
https://deepai.org/publication/resource-allocation-using-gradient-boosting-aided-deep-q-network-for-iot-in-c-rans
code
The requirement and development of Internet of Things (IoT) services, a key challenge in 5G, have been continuously rising with the expanding diversity and density of IoT devices. Cloud radio access networks (C-RANs) are regarded as a promising mobile network architecture to meet this new challenge. Specifically, C-RANs separate base stations into radio units, commonly referred to as remote radio heads (RRHs), and a centralized signal-processing baseband unit (BBU) pool. In a C-RAN, the BBU can be placed in a convenient and easily accessible place, and RRHs can be deployed up on poles or rooftops on demand. It is expected that the C-RAN architecture will be an integral part of future deployments to enable efficient IoT services. Dynamic resource allocation (DRA) for IoT in C-RANs is indispensable to maintain acceptable performance. In order to get the optimal allocation strategy, several works have tried to apply convex optimizations, like second-order cone programming (SOCP) in , semi-definite programming (SDP) in and mixed-integer programming (MIP) in . However, in real-time C-RANs where the environment keeps changing, the efficiency of the above methods in finding the optimal decision faces great challenges. Attempts have been made in reinforcement learning (RL) to increase the efficiency of the solution procedure in . RL has shown great advantages in solving DRA problems in wireless communication systems and for IoT. Existing methods for the DRA problem in RANs generally model it as an RL problem , by setting different parameters as the reward. For instance, the work in regarded the successful transmission probability of the user requests as the reward, and another work in set the sum of the average quality of service (QoS) and the averaged resource utilization of the slice as the reward. However, as the complexity of allocation problems increases, the search space of solutions tends to be infinite, which is hard to tackle. With the combination of RL and deep neural networks (DNNs), deep reinforcement learning (DRL) has been proposed and applied to address the above problems in . By utilizing the ability of DNNs to extract useful features directly from the high-dimensional state space, DRL is able to perform end-to-end RL . With the assistance of DNNs, large search spaces and continuous states are no longer insurmountable challenges. To apply a DRL framework to DRA problems, the design of reward, action and state becomes vital. The action set needs to be enumerable in most circumstances. The work in used a two-step decision framework to guarantee its enumerability, by changing the state of one RRH at each epoch, which performs well in models with innumerable states. Furthermore, in DRA problems, obtaining the optimal allocation strategy is in most cases finally turned into another optimization problem, i.e., a convex optimization problem , which can be solved mathematically. Unfortunately, traditional algorithms for solving the convex optimization problem, such as SOCP solvers, still face significant limitations, such as being time-consuming, making it hard to generate a policy for large-scale systems. Recent works have achieved significant improvements in computational efficiency by applying DNN approximators to DRA problems. However, the unstable performance of DNNs in the regression process makes it hard to achieve good performance . With a large number of hyper-parameters, fine-tuning becomes even harder in practical systems.
Some researchers have discussed and investigated this problem in the computability theory and information theory domains, e.g., in . The gradient boosting machine (GBM) has been firmly established as one of the state-of-the-art approaches in the machine learning (ML) community, and it has played a dominating role in existing data mining and machine learning competitions due to its fast training and excellent performance. However, to the best of our knowledge, few works have applied this method to the DRA problem, or even to other regression problems in communication systems.

In this paper, to efficiently address the DRA problem for IoT in C-RANs with innumerable states, one common form of DRL, namely the deep Q-network (DQN), is employed. Moreover, to tackle the difficulty of obtaining the reward in DQN with low latency, a tree-based GBM, i.e., the gradient boosting decision tree (GBDT), is utilized to approximate the solutions of SOCP. Then, we demonstrate the improvement of our method by comparing it to traditional methods in simulations. The main contributions of this paper are as follows:

We first give the model of the dynamic resource allocation problem for IoT in a real-time C-RAN. Then, we propose a GBDT-based regressor to approximate the SOCP solution of the optimal transmitting power consumption, which serves as the immediate reward needed in DQN. By doing so, there is no need to solve the original SOCP problem every time, and therefore great computational cost can be saved. Next, we aggregate the GBDT-based regressor with a DQN to propose a new framework, where the immediate reward is obtained from the GBDT-based regressor instead of SOCP solutions, to generate the optimal policy to control the states of RRHs. The proposed framework can reduce the power consumption of the whole C-RAN system for IoT. We show the performance gain and complexity reduction of our proposed solution by comparing it with existing methods.

The remainder of this paper is organized as follows. Section II presents the related works, whereas the system model is given in Section III. Section IV introduces the proposed GBDT-based DQN framework. The simulation results are reported in Section V, followed by the conclusions presented in Section VI.

II Related Works

The resource allocation problem in C-RANs is normally interpreted as an optimization problem, where one needs to search the decision space for an optimal combinatorial set of decisions to optimize different goals based on the current situation. Although numerous researchers have devoted their time to finding solutions to optimization problems, most of them are still hard or impossible to tackle with traditional purely mathematical methods. RL has recently been applied to address those problems. In , a model-free RL model was adopted to solve the adaptive selection problem between backhaul and fronthaul transfer modes, which aimed to minimize the long-term delivery latency in a fog radio access network (F-RAN). Specifically, an online on-policy value-based strategy, State-Action-Reward-State-Action (SARSA) with linear approximation, was applied in this system. Moreover, some works have proposed more efficient RL methods to overcome the slow convergence and scalability issues of traditional RL-based algorithms, such as Q-learning. In , four methods, i.e., state-space reduction techniques, convergence speed-up methods, demand forecasting combined with the RL algorithm, and DNNs, were proposed to handle the aforementioned problems, especially the huge state space.
Furthermore, as reported in , DQN achieved better performance on resource allocation problems compared with the traditional Q-learning-based method. In practice, the size of the possible state space may be very large or even infinite, which makes it impossible to traverse each state as required by traditional Q-learning. Approximation methods can address this kind of problem, in that they map the continuous and innumerable state space to a near-optimal Q-value space in a continuous setting, rather than a Q-table. DNNs show their advantage in approximation over high-dimensional spaces in many domains. Therefore, adopting a DNN to estimate the Q-value can improve system performance and computing efficiency, as reported in the simulation results from .

In , a two-step decision framework was adopted to solve the enumerability problem of the action space in C-RANs. The DRL agent first determined which RRH to turn on or off, and then the agent obtained the resource allocation solution by solving a convex optimization problem. Any other complex action can be decomposed into this two-step decision, reducing the action space significantly. Moreover, the work in shows that the SA (i.e., Single BS Association) scheme is impractical even in a small-scale C-RAN. Specifically, the SA scheme abandons the collaboration between RRHs and only supports a few users. That research serves as guidance for ours.

The works in and all adopted the DRL method to solve resource allocation problems in RAN settings. In , the concept of intelligent allocation based on DRL was proposed to tackle the cache resource optimization problem in F-RANs. To satisfy users' QoS, caching schemes should be intelligent, i.e., more effective and self-adaptive. Considering the limitation of cache space, this requirement challenges the design of such schemes, and it motivates the adoption of the DRL technique. As reported in , a DRL-based framework is used in more complicated resource allocation problems, i.e., virtualized radio access networks. Based on the average QoS utility and resource utilization of users, the DQN-based autonomous resource management framework allows virtual operators to customize their own utility functions and objective functions based on different requirements. In this paper, to improve system efficiency, we propose a novel gradient-boosting-based DQN framework for the resource allocation problem, which significantly improves system performance through offline training and online running. To the best of our knowledge, there are few works applying gradient boosting machines to approximate solutions of convex optimization problems in wireless communications, and we are the first to propose this framework.

III System Model

III-A Network Model

We consider a typical C-RAN architecture with a single cell, a set of RRHs denoted by , and a set of users, which can be IoT devices, denoted by . In the DRA for IoT in a C-RAN as shown in Fig. 1, we can get the current states, i.e., the state of each RRH and the demands of the IoT device users, from the network in the -th decision epoch . All the RRHs are connected to the centralized BBU pool, meaning all information can be shared and processed by the DQN-based agent to make decisions, i.e., turning the RRHs on or off. We simplify the model by assuming that all RRHs and users are equipped with a single antenna, which is readily generalized to the multi-antenna case by using the technique proposed in .
Then, the corresponding signal-to-interference-plus-noise ratio (SINR) at the receiver of the -th user can be given as Equation (1), where denotes the channel gain vector, each element denotes the channel gain from RRH to user , denotes the vector of all RRHs' beamforming to user , each element denotes the beamforming weight that RRH allocates to user , and is the noise. According to the Shannon formula, the data rate of user can be given as: where is the channel bandwidth and is the SINR margin, which depends on a couple of practical considerations, e.g., the modulation scheme.

The relationship between the transmitting power and the power consumed by the base station can be approximated as nearly linear, according to . We therefore apply a linear power model for each RRH: where is the transmitting power of RRH , is a constant denoting the drain efficiency of the power amplifier, and is the power consumption of RRH when it is active without transmitting signals. When there is no need for transmission, the RRH can be set to sleep mode, whose power is given by ; thus . In addition, we take into consideration the power consumed by state transitions of RRHs, i.e., the power consumed to change an RRH's state. We put the RRHs that reverse their states in the current epoch into the set and use to denote the power to change the mode between and , i.e., we assume both directions share the same power consumption. Therefore, in the current epoch, the total power consumption of all RRHs can be written as Equation (4).

From Equation (4), one can see that the latter two parts are easy to calculate, being composed of constants and relying only on the current state and action. To minimize the total, it is necessary to calculate the minimal transmitting power in each epoch, which depends on the allocation of beamforming weights among the active RRHs. Therefore, this optimization problem can be expressed as:

Control Plane (CP)-Beamforming: the objective is to get the minimal total transmitting power given the states of the RRHs and the user demands. The variables are the distributive weights corresponding to beamforming power; is defined as the user demand; is given by Equation (1); and is the transmitting power constraint for RRH . Constraint (5.2) ensures the demands of all users will be met, whereas Constraint (5.3) enforces the transmitting power limit of each RRH.

As shown in , the above CP-Beamforming can be transformed into an SOCP problem. Therefore, we rewrite the above optimization, applying variable to replace objective (5.1) by adding Constraint (6.3), which is a common method in the transformation process . We also rewrite Constraint (5.2) as Constraint (6.1) and apply some simple manipulations to get the modified optimization. It is now ready to see that the Modified CP-Beamforming optimization is a standard SOCP problem. By using the iterative algorithm proposed in , we can get the optimal solutions. It is worth noting that the CP-Beamforming optimization may have no feasible solution, meaning more RRHs should be activated to satisfy the user demands. In that case, we give a large negative reward to the DQN agent and jump out of the current training loop. Then, we can calculate the total power consumption by applying Equation (4). In the following part, we propose the DQN-based framework to predict the states of RRHs and adopt GBDT to approximate the solutions of the aforementioned SOCP problems.
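The display equations of this section were lost in extraction. Below is a hedged reconstruction of Equations (1)-(4) in LaTeX, inferred from the symbol definitions above; the exact notation and numbering in the original paper may differ.

% (1) SINR of user k under cooperative beamforming (single-antenna RRHs)
\mathrm{SINR}_k = \frac{\lvert \mathbf{h}_k^{H}\mathbf{w}_k \rvert^{2}}
                       {\sum_{j \neq k} \lvert \mathbf{h}_k^{H}\mathbf{w}_j \rvert^{2} + \sigma^{2}}

% (2) Achievable rate of user k, with bandwidth B and SINR margin \gamma
r_k = B \log_2\!\left(1 + \frac{\mathrm{SINR}_k}{\gamma}\right)

% (3) Linear power model of an active RRH m (drain efficiency \eta)
P_m = \frac{1}{\eta}\, p_m^{\mathrm{tx}} + p^{\mathrm{active}}

% (4) Total power: active RRHs, sleeping RRHs, and state transitions
P_{\mathrm{total}} = \sum_{m \in \mathcal{A}} \Big( \frac{1}{\eta}\, p_m^{\mathrm{tx}} + p^{\mathrm{active}} \Big)
                   + \sum_{m \in \mathcal{S}} p^{\mathrm{sleep}}
                   + \lvert \mathcal{T} \rvert\, p^{\mathrm{transition}}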
IV GBDT-Aided Deep Q-Network for DRA in C-RANs

IV-A State, Action Space and Reward Function

Our goal in the aforementioned DRA problem is to generate a policy that minimizes the system's power consumption in any state by taking the best action. Here, the best action refers to the action that contributes the least to overall power consumption in the long term while satisfying user demands, system requirements and constraints, among all the available actions. The fundamental idea of RL-based methods is to abstract an agent and an environment from the given problem, generate the environment model, and employ the agent to find the optimal action in each state, so as to maximize the cumulative discounted reward by exploring the environment and receiving the immediate rewards signalled by the environment. To apply the RL method to our problem, we transform the system model defined in Section III into an RL model.

The general assumption that future reward is discounted by a factor of per time-step is made here. Then, the cumulative discounted reward from time-step can be expressed as Equation (7.1), where denotes mathematical expectation, denotes the -th reward, denotes the -th state and denotes the discount factor. If tends to 0, the agent only considers the immediate reward; whereas if tends to 1, the agent focuses on the future reward. Moreover, the infinity over the summation sign indicates the endless sequence in the DRA problem.

Leveraging the common definition in Q-learning, the optimal action-value function is defined as the greatest expected cumulative discounted reward reached by taking action in state and then following a subsequently optimal policy, which guarantees the optimality of the cumulative future reward. The function follows the Bellman equation, a well-known identity in optimality theory. In this model, the optimal action-value function representing the maximum cumulative reward from state with action can be expressed as: where denotes the immediate reward received at state if action is taken, denotes a possible action in the next state , and the other symbols have the same meaning as in Equation (7.1). The expression means that the agent takes action in state , receives the immediate reward , and then follows an optimal trajectory that leads to the greatest value. In a general view, quantitatively demonstrates how promising the final expected cumulative reward will be if action is taken in state . That is to say, in the DRA problem, it quantifies how much power consumption the C-RAN can cut down if it decides to take action , i.e., switch one selected RRH on or off, when observing state , i.e., a set of user demands and the (sleep/active) states of the RRHs. Since the true value of can never be known, our goal is to employ a DNN to learn an approximation . In the following sections, simply denotes the approximated function and has all the same properties as .

The generic policy function defined in the context of RL is used here, expressed as Equation (7.3), where is the argmax of the action-value function over all possible actions in a specific state . The policy function leads to the action that maximizes the value in each state.

The state, action and reward defined in our problem are given as:

State: The state has two components: one is the set of RRH states and the other is the set of user demands. Specifically, is defined as the set of all RRHs' states, in which denotes the state of RRH . In the case of , RRH is in the sleep state, whereas means that it is in the active state.
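As a minimal illustration of this encoding, the Python sketch below (all names are ours, not the paper's) packs the binary RRH modes and normalized user demands into one observation vector and applies the single-RRH toggle action defined next.

import numpy as np

def encode_state(rrh_modes, demands, demand_max):
    # Observation = [RRH on/off bits] + [demands scaled to [0, 1]].
    return np.concatenate([
        np.asarray(rrh_modes, dtype=np.float32),
        np.asarray(demands, dtype=np.float32) / demand_max,
    ])

def apply_action(rrh_modes, action):
    # Actions 0..N-1 toggle the corresponding RRH; action N changes nothing.
    modes = list(rrh_modes)
    if action < len(modes):
        modes[action] = 1 - modes[action]
    return modes

state = encode_state([1, 0, 1], [20.0, 35.0], demand_max=40.0)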
The second component, , is defined as the set of all users' demands, where denotes the demand of user , is the minimum of all demands and is the maximal demand. Thus, the RL state is expressed as and its cardinality is .

Action: In each decision epoch, we enable the RL agent to determine the next state of one RRH. We use the set to denote the action space, in which . If , RRH changes its state; otherwise the RRH keeps its current state in the next epoch. The action space can thereby be substantially reduced. It is noteworthy that we set the constraint that , which means that at most one RRH alters its state, reducing the space to size .

Reward: To minimize the total power consumption, we define the immediate reward as the difference between the upper bound of the power consumption and the actual power consumption, expressed as: where denotes the upper bound of the power consumption obtained from the system settings, and denotes the actual total power consumption of the system, composed of the three parts defined in Equation (4). To be more specific, the reward is defined to minimize the system power consumption under the condition of satisfying the user demands, which requires us to solve the optimization problem in Equation (6), shown in Section III. To sum up, the policy in this work is a function that maps the current state , the set of user demands and RRH statuses, to the best action , turning one RRH on or off, that minimizes the overall power consumption of the whole system.

IV-B Gradient Boosting Decision Tree

GBM is a gradient boosting framework that can be applied to any classifier or regressor. To be more specific, GBM is an aggregation of base estimators (i.e., classifiers or regressors); any base estimator, like a nearest-neighbor, neural-network or naive-Bayes estimator, can be fitted into the GBM. Better base estimators yield higher performance. Among all kinds of GBM, a prominent one is based on decision trees, called the gradient boosting decision tree (GBDT), which has been gaining popularity for years due to its competitive performance in different areas. In our framework, the GBDT is applied to the regression task due to its prominent performance.

The concept of GBDT is to optimize the empirical risk via steepest gradient descent in hypothesis space by adding more base tree estimators. Considering the regression task in our work, given a dataset with entities of different states and their corresponding rewards generated by simulation and by solving SOCP, one has , where denotes the state representation of the system model and denotes the corresponding solution from the SOCP solver in Equation (6), in line with the definition of the reward function. To optimize the empirical risk of the regression is to minimize the expectation of a well-defined loss function over the given dataset, which can be expressed as: where denotes the model itself and is the final mapping approximating our fitting target, the power consumption; is the set of inputs representing the system model, and is the set of corresponding SOCP solutions. Here the first term is the model prediction loss, a differentiable convex function measuring the distance between the true power consumption and the estimated power consumption; the L2 loss (i.e., mean-square error) is applied in this task.
The latter term is the regularization penalty applied to constrain model complexity, helping to produce a model with less over-fitting and better generalization. The choice of prediction loss and regularization penalty varies with the circumstances. The penalty function is given by: where and are two hyper-parameters, while and are the number of trees ensembled and the weights owned by each tree, respectively. When the regularization parameter is set to zero, the loss function falls back to the traditional gradient tree boosting method .

In GBDT, one starts with a weak model that simply predicts the mean value of at each leaf and improves the prediction by aggregating additive fixed-size decision trees as base estimators to predict the pseudo-residuals of the previous results. The final prediction is a linear combination of the outputs of all regression trees. The final estimator function, as mentioned in (9), can be expressed as follows: where is the initial guess, is the base estimator at iteration , and is the weight for that estimator or a fixed learning rate. The product denotes the step at iteration .

IV-C GBDT-Based Deep Q-Network (DQN)

In this section, we show how to apply the GBDT-based DQN scheme to solve our DRA problem for IoT in a real-time C-RAN, using the previously defined states, actions and reward. Traditional RL methods, like Q-learning, compute and store the Q-value for each state-action pair in a table. It is unrealistic to apply those methods to our problem, as the state-action pairs are countless and the user demands in a state are continuous variables. Therefore, DQN is considered the best solution for this problem. Similar to related works, e.g., , we also apply an experience replay buffer and fixed Q-targets to estimate the action-value function .

Our framework has two stages, i.e., offline training and online decision making with regular tuning. For the offline training stage, we pre-train the DQN to estimate the value of taking each action in any specific state. To achieve this, millions of system samples are generated in terms of all RRHs' states, user demands and the corresponding system power consumption, by simulation and by solving the SOCP problem given in Equation (6). Then, the GBDT is employed to estimate the immediate reward, to avoid the expensive computation of solving the SOCP problem during further training and tuning. For online decision making and regular tuning, we load the pre-trained DQN to generate the best action to take for our proposed DRA problem in real time. This is achieved by employing the policy function defined in (7.3), which maximizes the in state . To emphasize, the function tells how much the system can cut down the power consumption if it decides to take action when seeing state . The DQN then observes the immediate reward obtained from the GBDT approximation and observes the next state . In the online regular tuning scheme, the DQN does not immediately update its model parameters when observing new states but stores the new observations in the memory buffer. Then, under given conditions, the DQN fine-tunes its parameters according to that buffer. This allows the DQN to adapt dynamically to new patterns on a regular basis. The whole algorithm is given in Algorithm 1, whereas the framework of the GBDT-based DQN is given in Fig. 2. Here denotes the set of model parameters. The loss function is the L2 loss (i.e., mean-square error), which measures the difference between the Q-target and the model output.
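A minimal end-to-end sketch of this two-stage idea is given below in Python. Everything here is illustrative: the environment transition, the stand-in targets for the GBDT surrogate, and all names and network sizes are our assumptions, not the paper's implementation (the paper's own experiments use TensorFlow and LightGBM, per Section V).

import random
from collections import deque

import numpy as np
import lightgbm as lgb
from tensorflow import keras

N_RRH, N_USER = 8, 4
STATE_DIM = N_RRH + N_USER
N_ACTION = N_RRH + 1                      # toggle one RRH, or do nothing

# Offline stage: fit the GBDT surrogate on (state -> SOCP-optimal power).
rng = np.random.default_rng(0)
X = rng.random((20_000, STATE_DIM))
y = X[:, N_RRH:].sum(axis=1)              # placeholder for SOCP solver outputs
gbdt = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05).fit(X, y)

def reward(state):
    # P_max - P_actual, with P_actual approximated by the GBDT instead of SOCP.
    p_max = float(N_USER)                 # assumed upper bound for the toy targets
    return p_max - float(gbdt.predict(state[None])[0])

def build_q_net():
    return keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(STATE_DIM,)),
        keras.layers.Dense(N_ACTION),
    ])

q_net, target_net = build_q_net(), build_q_net()
target_net.set_weights(q_net.get_weights())
q_net.compile(optimizer="adam", loss="mse")   # L2 loss between Q-target and output

buffer, GAMMA, EPS = deque(maxlen=50_000), 0.9, 0.1

def step(state, action):
    # Placeholder transition: toggle one RRH bit, then let demands drift.
    nxt = state.copy()
    if action < N_RRH:
        nxt[action] = 1.0 - nxt[action]
    nxt[N_RRH:] = np.clip(nxt[N_RRH:] + rng.normal(0, 0.02, N_USER), 0, 1)
    return nxt

state = rng.random(STATE_DIM)
for t in range(500):
    if random.random() < EPS:             # epsilon-greedy exploration
        action = random.randrange(N_ACTION)
    else:
        action = int(np.argmax(q_net.predict(state[None], verbose=0)[0]))
    nxt = step(state, action)
    buffer.append((state, action, reward(nxt), nxt))
    state = nxt

    if len(buffer) >= 64:                 # experience replay
        batch = random.sample(buffer, 64)
        s = np.stack([b[0] for b in batch]); ns = np.stack([b[3] for b in batch])
        q = q_net.predict(s, verbose=0)
        nq = target_net.predict(ns, verbose=0).max(axis=1)
        for i, (_, a, r, _) in enumerate(batch):
            q[i, a] = r + GAMMA * nq[i]   # Bellman target with the fixed target net
        q_net.train_on_batch(s, q)
    if t % 100 == 0:                      # refresh the fixed Q-target network
        target_net.set_weights(q_net.get_weights())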
In Algorithm 1, S refers to the step index. In Fig. 2, one can see that the left side depicts the DQN framework, illustrating the agent, the environment and how the reward is obtained. Specifically, the agent observes a new state from the environment after taking an action and then receives an immediate reward signalled by the reward function of the GBDT approximator. A traditional DQN obtains the reward by solving the SOCP optimization, which cannot run in real time, as explained before. In our architecture, we adopt GBDT regression (i.e., the right side of Fig. 2) to obtain the reward, which can operate online in real time. We also give the training process of GBDT in the Appendix.

IV-D Error Tolerance Examination (ETE)

Our target is to use GBDT to approximate the typical SOCP problem in C-RANs under the DQN framework. Thus, it is important to evaluate its practical performance. The error from GBDT or DNN will influence the optimality of the given scheme, possibly even worsening the whole system's power consumption. Therefore, the examination of the error's influence is of vital significance. Considering its important role in the whole DRA problem, we emphasize the concept of error tolerance examination (ETE) here. Specifically, in the simulation, we first compare the optimal decision provided by the CP-Beamforming solution with the near-optimal decision from the GBDT or DNN approximation, and then evaluate its performance in the dynamic resource allocation setting.

V Simulation Results

In this section, we present the simulation settings and the performance of the proposed GBDT-based DQN solution. We take the definition of the channel fading model from previous work as : where is the path loss at distance , is the antenna gain, is the shadowing coefficient and is the small-scale fading coefficient. The simulation settings are summarized in Table I. All training and testing processes are conducted in an environment equipped with 8 GB RAM, an Intel Core i7-6700HQ (2.6 GHz), Python 3.5.6, TensorFlow 1.13.1 and LightGBM 2.2.3.

Table I. Simulation settings:
Channel bandwidth: 10 MHz
Max transmit power: 1.0 W
Active power: 6.8 W
Sleep power: 4.3 W
Transition power: 2.0 W
Background noise: -102 dBm
Antenna gain: 9 dBi
Log-normal shadowing: 8 dB
Small-scale fading: Rayleigh
Path loss at distance (km): dB
Distance: uniformly distributed in m
Power amplifier efficiency: 25%
(W = Watt, dB = decibel, dBm = decibel-milliwatts, dBi = dB isotropic.)

We compare our DQN-based solution containing the GBDT approximator (abbreviated as DQN) with two other schemes: 1) All RRHs Open (AO): all RRHs are turned on and can serve each user; 2) One RRH Closed (OC): one of the RRHs (chosen randomly) stays in the sleep state and cannot serve any user. It is noteworthy that in previous work , another solution, in which only one random RRH is turned on, is also discussed for the dynamic resource allocation problem. However, it can hardly be applied to practical systems . Therefore, we do not compare against it in this paper.

V-A GBDT-Based SOCP Approximator

V-A1 Computational Complexity

We compare the computational complexity of the GBDT approximator and of the traditional SOCP solver in . Firstly, a test set of 1000 entities is randomly generated in terms of the status of RRHs and user demands. Then, both the GBDT approximator and the traditional SOCP method are executed to predict or compute the outputs of that test set, 10000 times each.
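Before turning to the results, note that the channel fading model defined above can be sampled in a few lines. This sketch is ours, and the path-loss constants are an assumption (a common 3GPP-style model used in C-RAN studies); the paper's exact path-loss expression was lost in extraction.

import numpy as np

def channel_gain(d_km, antenna_gain_db=9.0, shadow_std_db=8.0, rng=None):
    # gain = path loss x log-normal shadowing x antenna gain x Rayleigh fading
    rng = rng or np.random.default_rng()
    path_loss_db = 148.1 + 37.6 * np.log10(d_km)        # assumed constants
    shadowing_db = rng.normal(0.0, shadow_std_db)       # 8 dB log-normal shadowing
    rayleigh = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    amplitude = 10 ** ((antenna_gain_db - path_loss_db - shadowing_db) / 20)
    return amplitude * rayleigh

h = channel_gain(d_km=0.3)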
One can see from Table II that the GBDT approximator is much faster than the SOCP solver, which proves the efficiency of the GBDT approximator.

Table II. Average time per input in seconds (GBDT vs. SOCP):
6 RRHs and 3 users: 0.00079 vs. 0.08281
8 RRHs and 4 users: 0.00077 vs. 0.09387
12 RRHs and 6 users: 0.00070 vs. 0.16240
18 RRHs and 9 users: 0.00075 vs. 0.42803
(Each time is obtained by averaging over 1000 different system inputs, each recalculated 10000 times by the two algorithms respectively.)

V-A2 Fitting Property

Next, we analyse the performance of the GBDT approximator in a specific situation with 8 RRHs and 4 IoT device users whose demands range from 20 Mbps to 40 Mbps. We compare it with a DNN approximator: a fully-connected network with 3 layers of 32, 64 and 1 neurons, respectively, using rectified linear unit (ReLU) activations. Firstly, in Fig. 3(a), we assume that all 8 RRHs are turned on. One can see from this figure that GBDT has better fitting performance than the DNN. Then, we assume that one RRH is switched off. One can see from Fig. 3(b) that GBDT still fits the SOCP solutions very well. In Fig. 3(c), we assume that the states of all 8 RRHs are set on or off randomly. As expected, GBDT again fits the SOCP solutions much better than the DNN.

V-B Training Effect of GBDT and DNN

We demonstrate the training performance of the GBDT approximator and the DNN approximator by comparing their training effects in Fig. 4. Mean squared error (MSE) is used to calculate the loss. From Fig. 4, one can see that even when trained for far more time, the loss of the DNN is still higher than that of GBDT. One also notices that GBDT has fewer parameters to adjust and therefore a quicker training process. The specific comparison is not expanded here, as it is not the focus of this paper. Next, we examine the performance of the GBDT-based DQN solution.

V-C System Performance

In this section, we consider 8 RRHs and 4 users whose demands are randomly selected. We change the user demands every 100 ms. The performance of AO, OC and the GBDT-based DQN is compared next.

V-C1 Instant Power

We examine the instant system power consumption in this subsection. In the top figures of Fig. 5(a) and Fig. 5(b), we compare the AO and DQN strategies, where we set all the RRHs open initially; all RRHs then stay active in the AO scheme. In the bottom figures of Fig. 5(a) and Fig. 5(b), we turn off one RRH randomly at the beginning for both OC and DQN; that RRH then stays switched off in the OC scheme. Moreover, user demands are selected randomly from 20 Mbps to 40 Mbps in Fig. 5(a), and from 20 Mbps to 60 Mbps in Fig. 5(b). One can see from all the figures in Fig. 5 that our proposed DQN always outperforms AO and OC. This is because DQN turns RRHs on and off depending on the current state of the system, whereas AO always keeps all RRHs on and OC randomly turns off one RRH, which may not be the optimal strategy and contributes to larger power consumption than DQN. One can also see that when we increase the upper limit of user demands from 40 Mbps in Fig. 5(a) to 60 Mbps in Fig. 5(b), the performance of DQN, OC and AO all become more unstable. However, our proposed DQN still performs best compared with AO and OC.
Moreover, one can see that although the GBDT approximator may introduce some errors, our proposed DQN framework still performs well, which shows the good error tolerance of our proposed solution.

V-C2 Average Power

In Fig. 6, we show the long-term performance comparison between the GBDT-based DQN, AO and OC. The DQN with reward obtained from the SOCP solver is also depicted. We compare the average system power consumption by averaging all instant system power over the past time slots. We first analyse the performance, under user demands below 40 Mbps, of both DQN schemes (GBDT and SOCP) versus the AO scheme. We set all RRHs switched on and let the user demands change every 100 ms per slot, for a duration of 500 s. One can see from Fig. 6(a) that both DQN schemes outperform AO and can save around 8 Watts per time slot. The slight fluctuation comes from the randomness of the demands. Moreover, one can see from Fig. 6(a) that the DQN with GBDT has similar performance to the DQN with the SOCP solver, which shows the error tolerance of our proposed solution. Then we turn one RRH off and analyse the average system power consumption under the DQN and OC schemes. One can see from Fig. 6(b) that both DQN schemes still outperform the OC scheme, as expected. Also, the DQN scheme with GBDT again performs similarly to the one with the SOCP solver.

V-C3 Overall Performance of GBDT-Based DQN

To evaluate the overall performance of the GBDT-based DQN in different situations, we set user demands from 20 Mbps to 60 Mbps with a 10 Mbps interval, keeping other factors unchanged. One can see from Fig. 7(a) and Fig. 7(b) that as user demands increase, the power consumption of AO, OC and DQN increases as well. One also sees that our proposed GBDT-based DQN performs much better than AO and OC, as expected, which proves the effectiveness of our scheme.

In this paper, we presented a GBDT-based DQN framework to tackle the dynamic resource allocation problem for IoT in real-time C-RANs. We first employed the GBDT to approximate the solutions of the SOCP problem. Then, we built the DQN framework to generate an efficient resource allocation policy regarding the status of RRHs in C-RANs. Furthermore, we demonstrated the offline training, online decision making and regular tuning processes. Lastly, we evaluated the proposed framework against two other methods, AO and OC, and examined its accuracy and error tolerance compared with the SOCP-based DQN scheme. Simulation results showed that the proposed GBDT-based DQN achieves much better power saving than the baseline solutions in the real-time setting. Future work is in progress to let the GBDT approximator meet the strict constraints of practical problems, so that it can be employed in a wide range of scenarios.

[Training and Predicting Process of GBDT] The training process of GBDT is shown in Algorithm 2. GBDT combines two concepts: the gradient and boosting. In the training process, the 0-th tree is fitted to the given training dataset; it predicts the mean value of in the training set regardless of the input, and its predicted values are denoted as . However, the predictions from the 0-th tree still leave residuals relative to the true values .
Then, an additive tree is fitted to a new dataset whose inputs are the same as for the 0-th tree, but whose fitting targets are the residuals . The predictions of the GBDT are then the linear combination of the predictions from the 0-th tree and the new additive tree, namely , where is the weight attributed to this tree. Next, another tree is fitted to the new residuals, and the same process follows as before. From the above process, one can see that the boosting concept is to utilize the residuals between the previously ensembled results and the true values; by learning from the residuals, the model makes progress as new trees are added. The gradient part of the concept can be explained as follows: the whole training process is supervised and guided by the gradient of the objective function, typically expressed as , whose derivative is the pseudo-residual between and .

- J. Lin et al., “A survey on Internet of Things: Architecture, enabling technologies, security and privacy, and applications,” IEEE Internet Things J., vol. 4, no. 5, pp. 1125-1142, Oct. 2017.
- A. Checko et al., “Cloud RAN for mobile networks: A technology overview,” IEEE Commun. Surveys Tuts., vol. 17, no. 1, pp. 405–426, Sep. 2014.
- Z. Xu, Y. Wang, J. Tang, J. Wang, and M. C. Gursoy, “A deep reinforcement learning based framework for power-efficient resource allocation in cloud RANs,” in Proc. IEEE Int. Conf. Commun. (ICC), pp. 1–6, 2017.
- A. Wiesel, Y. C. Eldar, and S. Shamai, “Linear precoding via conic optimization for fixed MIMO receivers,” IEEE Trans. Signal Process., vol. 54, no. 1, pp. 161–176, 2006.
- M. Gerasimenko et al., “Cooperative radio resource management in heterogeneous cloud radio access networks,” IEEE Access, vol. 3, pp. 397–406, 2015.
- Y. Zhou et al., “Deep reinforcement learning based coded caching scheme in fog radio access networks,” 2018 IEEE/CIC International Conference on Communications in China (ICCC Workshops), pp. 309–313, 2018.
- P. Rost et al., “Cloud technologies for flexible 5G radio access networks,” IEEE Commun. Mag., vol. 52, no. 5, pp. 68–76, 2014.
- G. Sun et al., “Dynamic reservation and deep reinforcement learning based autonomous resource slicing for virtualized radio access networks,” IEEE Access, vol. 7, pp. 45758–45772, 2019.
- V. François-Lavet et al., “An introduction to deep reinforcement learning,” Foundations and Trends in Machine Learning, vol. 11, no. 3–4, pp. 219–354, 2018.
- H. He et al., “Model-driven deep learning for physical layer communications,” arXiv preprint arXiv:1809.06059, 2019.
- H. Zhu et al., “Caching transient data for Internet of Things: A deep reinforcement learning approach,” IEEE Internet Things J., vol. 6, no. 2, pp. 2074–2083, Apr. 2019.
- H. Zhu, Y. Cao, W. Wang, T. Jiang, and S. Jin, “Deep reinforcement learning for mobile edge caching: Review, new features and open issues,” IEEE Netw., vol. 32, no. 6, pp. 50–57, Nov. 2018.
- D. Liu et al., “User association in 5G networks: A survey and an outlook,” IEEE Commun. Surveys Tuts., vol. 18, no. 2, pp. 1018–1044, 2nd Quart. 2015.
- A. Domahidi, E. Chu, and S. Boyd, “ECOS: An SOCP solver for embedded systems,” Control Conference (ECC) 2013 European, pp. 3071–3076, 2013.
- E. Andersen and K. Andersen, “The MOSEK interior point optimizer for linear programming: An implementation of the homogeneous algorithm,” High Performance Optimization, vol. 33, pp. 197–232, 2000.
- J. F. Sturm, “Using SeDuMi 1.02, a Matlab toolbox for optimization over symmetric cones,” Optimization Methods and Software, vol. 11, no.
1–4, pp. 625–653, 1999.
- K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” Proceedings of the 27th International Conference on Machine Learning, Omnipress, pp. 399–406, 2010.
- J. R. Hershey, J. Le Roux, and F. Weninger, “Deep unfolding: Model-based inspiration of novel deep architectures,” arXiv preprint arXiv:1409.2574, 2014.
- C. Lu, W. Xu, S. Jin, and K. Wang, “Bit-level optimized neural network for multi-antenna channel quantization,” IEEE Commun. Lett. (Early Access), pp. 1–1, Sep. 2019.
- C. Lu, W. Xu, H. Shen, J. Zhu, and K. Wang, “MIMO channel information feedback using deep recurrent network,” IEEE Commun. Lett., vol. 23, no. 1, pp. 188–191, Jan. 2019.
- Z. H. Zhou and J. Feng, “Deep forest: Towards an alternative to deep neural networks,” arXiv preprint arXiv:1702.08835, 2017.
- H. Sun et al., “Learning to optimize: Training deep neural networks for interference management,” IEEE Trans. Signal Process., vol. 66, no. 20, pp. 5438–5453, Oct. 2018.
- J. H. Friedman, “Greedy function approximation: A gradient boosting machine,” Annals of Statistics, pp. 1189–1232, 2001.
- L. Breiman, “Bias, variance, and arcing classifiers,” Tech. Rep. 460, Statistics Department, University of California, Berkeley, CA, USA, 1996.
- Z. H. Zhou, “Ensemble methods: Foundations and algorithms,” Chapman and Hall/CRC, 2012.
- D. Opitz and R. Maclin, “Popular ensemble methods: An empirical study,” Journal of Artificial Intelligence Research, pp. 169–198, 1999.
- R. Polikar, “Ensemble based systems in decision making,” IEEE Circuits Syst. Mag., vol. 6, no. 3, pp. 21–45, 2006.
- L. Rokach, “Ensemble-based classifiers,” Artificial Intelligence Review, vol. 33, no. 1–2, pp. 1–39, 2010.
- A. Natekin and A. Knoll, “Gradient boosting machines, a tutorial,” Frontiers in Neurorobotics, vol. 7, no. 21, 2013.
- T. P. Do and Y. H. Kim, “Resource allocation for a full-duplex wireless-powered communication network with imperfect self-interference cancelation,” IEEE Commun. Lett., vol. 20, no. 12, pp. 2482–2485, Dec. 2016.
- J. Miao, Z. Hu, K. Yang, C. Wang, and H. Tian, “Joint power and bandwidth allocation algorithm with QoS support in heterogeneous wireless networks,” IEEE Commun. Lett., vol. 16, no. 4, pp. 479–481, 2012.
- J. Moon et al., “Online reinforcement learning of X-Haul content delivery mode in fog radio access networks,” IEEE Signal Process. Lett., vol. 26, no. 10, pp. 1451–1455, 2019.
- I. John, A. Sreekantan, and S. Bhatnagar, “Efficient adaptive resource provisioning for cloud applications using reinforcement learning,” 2019 IEEE 4th International Workshops on Foundations and Applications of Self* Systems (FAS*W), Umea, Sweden, pp. 271–272, 2019.
- J. Li, H. Gao, T. Lv, and Y. Lu, “Deep reinforcement learning based computation offloading and resource allocation for MEC,” 2018 IEEE Wireless Communications and Networking Conference (WCNC), pp. 1–6, April 2018.
- B. Dai and W. Yu, “Energy efficiency of downlink transmission strategies for cloud radio access networks,” IEEE J. Sel. Areas Commun., vol. 34, no. 4, pp. 1037–1050, Apr. 2016.
- G. Auer et al., “How much energy is needed to run a wireless network,” IEEE Wirel. Commun., vol. 18, no. 5, pp. 40–49, 2011.
- S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
- M. S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret, “Applications of second-order cone programming,” Linear Algebra and its Applications, vol. 284, no. 1, pp. 193–228, 1998.
- R. S. Sutton and A. G.
Barto, Introduction to Reinforcement Learning, Cambridge: MIT Press, 1998.
- T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2016.
- Y. Shi, J. Zhang, and K. B. Letaief, “Group sparse beamforming for green cloud-RAN,” IEEE Trans. Wireless Commun., vol. 13, no. 5, pp. 2809–2823, May 2014.
- B. Dai and W. Yu, “Energy efficiency of downlink transmission strategies for cloud radio access networks,” IEEE J. Sel. Areas Commun., vol. 34, no. 4, pp. 1037–1050, 2016.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499634.11/warc/CC-MAIN-20230128121809-20230128151809-00281.warc.gz
CC-MAIN-2023-06
42,834
177
https://django.fun/en/qa/438512/
code
In Django, how to filter a _set inside a for loop?

I have these two models:

class Convocacao(models.Model):
    cursos = models.ForeignKey(Cursos)

class RegistroConvocacao(models.Model):
    convocacao = models.ForeignKey(Convocacao)

I get a specific object from Convocacao:

obj = get_object_or_404(
    Convocacao.objects.prefetch_related("cursos", "registroconvocacao_set"),
    pk=pk,
)

Now, while the for loop runs through obj.cursos, I need to filter obj.registroconvocacao_set inside the loop:

for curso in obj.cursos.all():
    obj.registroconvocacao_set.filter(...filters...)...

However, in each iteration of the for loop, obj.registroconvocacao_set.filter() makes a new query to the database, generating thousands of database accesses and repeated queries. How do I prefetch obj.registroconvocacao_set to avoid this?

You've already prefetched the objects, so iterate through all of them in Python to generate a list of those you want. For example:

todo = []
for o in obj.registroconvocacao_set.all():
    if rejection_condition:
        continue
    if acceptance_condition:
        todo.append(o)
        continue
    ...

for filtered_object in todo:
    ...

Simple cases just have a simple test in the loop, to perform an action or not:

for o in obj.registroconvocacao_set.all():
    if condition:
        ...  # do stuff with o

or a list comprehension:

todo = [o for o in obj.registroconvocacao_set.all() if condition]

Yes, it would be nice if querysets recognised that what they are filtering has already been prefetched, so that they could do this internally without hitting the DB again. But they don't, so you have to code it yourself.
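Another option, when the filter does not depend on the loop variable, is Django's Prefetch object, which runs the filtered query once and caches the result on a custom attribute. A sketch (the active=True filter and attribute name are illustrative, not from the question):

from django.db.models import Prefetch
from django.shortcuts import get_object_or_404

obj = get_object_or_404(
    Convocacao.objects.prefetch_related(
        "cursos",
        Prefetch(
            "registroconvocacao_set",
            queryset=RegistroConvocacao.objects.filter(active=True),
            to_attr="registros_filtrados",
        ),
    ),
    pk=pk,
)

for curso in obj.cursos.all():
    for registro in obj.registros_filtrados:  # already in memory, no new query
        ...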
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945287.43/warc/CC-MAIN-20230324144746-20230324174746-00614.warc.gz
CC-MAIN-2023-14
1,595
16
https://sandbox.ietf.org/doc/draft-tanaka-pce-stateful-pce-mbb/
code
Make-Before-Break (MBB) MPLS-TE LSP restoration and reoptimization procedure using Stateful Path Computation Element (PCE).

Document type: Expired Internet-Draft (individual)
Last updated: 2019-09-08 (latest revision 2019-03-07)
Intended RFC status: (None)
Formats: expired and archived; pdf, htmlized, bibtex
Stream state: (No stream defined)
RFC Editor Note: (None)
Send notices to: (None)

Stateful Path Computation Element (PCE) and its corresponding protocol extensions provide a mechanism that enables the PCE to perform stateful control of Multiprotocol Label Switching (MPLS) Traffic Engineering Label Switched Paths (TE LSPs). Stateful PCE supports manipulation of an existing LSP's state and attributes (e.g., bandwidth and path) via delegation, and also instantiation of new LSPs in the network via PCE Initiation procedures. In current MPLS-TE networks using the Resource ReSerVation Protocol (RSVP-TE), LSPs are often controlled by make-before-break (M-B-B) signaling at the headend for the purpose of LSP restoration and reoptimization. In most cases, rerouting LSP traffic without any data disruption is an essential operation. This document specifies the procedure for applying stateful PCE control to make-before-break RSVP-TE signaling. Two types of restoration/reoptimization procedures are defined: implicit mode and explicit mode. This document also specifies the usage and handling of stateful PCEP (PCE Communication Protocol) messages, the expected behavior of the PCC as RSVP-TE headend, and the necessary extensions for additional PCEP objects. (Note: The e-mail addresses provided for the authors of this Internet-Draft may no longer be valid.)
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738777.54/warc/CC-MAIN-20200811115957-20200811145957-00595.warc.gz
CC-MAIN-2020-34
1,691
10
https://stackideas.com/docs/easyarticles/administrators/setup/common-installation-issues
code
Common Installation Issues

Let's face it, this documentation may not work for everyone due to the vast variety of hosting setups and configurations. Nevertheless, we have compiled a list of known common issues that users encounter, to assist you with the installation.

You may face the following errors while uploading the installer through the Extension Manager:

JFolder::create: Could not create directory
Unable to create destination

Here are the known solutions to this issue (a quick way to check the first two is sketched below):
- Check that the file/folder permissions are all writable. You can view the file permissions under System > System Information. Please ensure that all of the folders listed are made writable.
- Ensure the Temporary Path is pointing to the correct folder. You can find this setting under the Global Configuration of your Joomla site.
- There are instances where your disk usage is close to 90%, because some hosting providers prevent users from writing additional data to a nearly full disk.
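A minimal sketch (in Python, with an assumed path) that checks the temp folder's writability and the disk usage mentioned above:

import os
import shutil

tmp_path = "/var/www/html/tmp"   # assumed; should match Joomla's Temporary Path

print("exists:", os.path.isdir(tmp_path))
print("writable:", os.access(tmp_path, os.W_OK))

total, used, _free = shutil.disk_usage(tmp_path)
print("disk used: {:.0%}".format(used / total))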
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687938.15/warc/CC-MAIN-20170921224617-20170922004617-00370.warc.gz
CC-MAIN-2017-39
961
8
http://houseandhome.com/design/diy-exaggerated-baseboard
code
Build up your baseboard. It's a decorator trick that's a cinch. With the strategic use of paint and simple moulding, stacked horizontally, you create the illusion of a hefty, high-style trim. Give it a try today. Print out a materials list and step-by-step instructions to do this Home Depot project.
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036641.11/warc/CC-MAIN-20150601214356-00000-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
300
2
http://stackoverflow.com/questions/13223737/how-to-read-a-file-in-other-directory-in-python/13223867
code
I have a file named 5_1.txt in a directory I named direct. How can I read that file using read?

I verified the path using:

import os
os.getcwd()
os.path.exists(direct)

The path checks passed, but then I got this error:

Traceback (most recent call last):
  File "<pyshell#17>", line 1, in <module>
    x_file = open(direct, 'r')
IOError: [Errno 13] Permission denied

I don't know why I can't read the file. Any suggestions?
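Judging from the traceback, open() is being called on the directory path itself rather than on the file inside it, which can surface as Errno 13 on some platforms. A likely fix (a sketch, assuming direct holds the directory path) is to join the directory and the filename:

import os

file_path = os.path.join(direct, "5_1.txt")  # point at the file, not the folder

with open(file_path, "r") as f:
    contents = f.read()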
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121418.67/warc/CC-MAIN-20160428161521-00065-ip-10-239-7-51.ec2.internal.warc.gz
CC-MAIN-2016-18
428
8
https://www.replicante.io/docs/devnotes/main/notes/releasing/
code
This is a summary of the release steps for these Replicante Official tools:

Release of the above sub-projects is based on the replidev release commands. The release process with replidev release is as follows:

# Prepare the repository for release.
# This command will guide you to update changelogs and versions.
$ replidev release prep

# Commit any changes done during the prep phase.
$ git commit .

# Run checks to ensure the release is ready.
$ replidev release check

# Push the release commit (if needed to fix errors raised by checking).
$ git push

# Once all changes are committed and the checks pass, publish the release.
# This will also publish any crate/docker image in the project and tag the current commit.
$ replidev release publish

# Push the release tag.
$ git push --tags

# Create a new release in GitHub with appropriate description and changelog.

Replicante has an official quick start guide to introduce it to people. Not only does this provide a basic last-catch test, it is also key that the first experience works on the first try for every user. It is possible to test the quick start guide on the upcoming release before releasing, using the replidev release check step for each component (agents, core, platforms, ...).

The replidev release publish command will push release artefacts to registries. For this to work, the appropriate login command must be issued and valid credentials provided.

For the time being there is a required release order. I hope in the future this can be removed with the introduction of the Rust SDK. Once all changes are released some extra steps are needed:
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100972.58/warc/CC-MAIN-20231209202131-20231209232131-00607.warc.gz
CC-MAIN-2023-50
1,600
14
https://www.trenddirectuk.com/trend-router-cutters/trend-professional-hss-cutters/trend-hss-window-industry-cutters/trend-three-flute-countersink
code
49/50X8MMHSS Three flute 90 degree countersink 12.7 mm diameter - 8mm Shank
49/50X1/4HSS Three flute 90 degree countersink 12.7 mm diameter - 1/4" Shank
WP-VJS/09 Varijig scale 100 degrees metric imperial *REPLACEMENT PART*
T5EB Medium Duty 1000w Router 240v | T5 Router
UNIBASE (unibase) Universal Sub-base with pins and bush
WP-HJ/B/06 Adjustment screw M5x10 csk H/JIG/B *REPLACEMENT PART*
WP-MT/07 Tilting back plate MT/JIG *REPLACEMENT PART*
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645550.13/warc/CC-MAIN-20180318071715-20180318091715-00627.warc.gz
CC-MAIN-2018-13
635
12
https://help.cuckoo.co/en/articles/6142620-how-can-i-check-my-speed
code
The average download speed on our network depends on the plan you're on. Our 80Mb 'Fast' plan, using Fibre To The Cabinet technology, typically delivers 60-70Mb/s (average upload speed of 15-20Mb/s). However, if you're on our 'Really Fast' full-fibre 115Mb plan, the average national speed is 98Mb/s (with an average upload speed of 20Mb/s). Customers on our 'Eggceptional' 1Gb full-fibre plan get an average of 900Mb/s down and 115Mb/s up. You may see higher or lower speeds than this depending on your postcode and how far you are from the green cabinet or exchange.

🤞 Testing your speed

We recommend using a public, free-to-use checker like these:

There are four main reasons why speeds can vary between postcodes:
- Your home data usage - for example, video streaming eats up a lot of capacity
- Your home network set-up - for example, effective placement of your router can aid speeds
- Our wider network usage - the latest iOS or game releases can really impact download speeds across the network
- Distance from the cabinet - if your home is quite far from the green cabinet (called a DSLAM or PCP cabinet) on your street, your speeds may be slower

If you are having trouble with your connection speeds and want to improve them, see here.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00019.warc.gz
CC-MAIN-2022-40
1,231
11
https://www.optnation.com/front-end-developerg4r5y-remote-jobs-in-seattle-and-and-wa
code
25 April 2016: SAP BASIS (Business Application Software Integrated Solution) is a middleware programs set. The responsibilities of SAP BASIS include printing/spooling configuration and administration, creating and restoring data back-ups, managing the database space allocation, and creating roles using different meth...

18 July 2019: The Donald Trump administration in the US is proposing a nearly five-fold increase in merit-based legal immigration and halving those based on family and the humanitarian system, in an effort to overhaul the outdated system...

19 April 2016: ASP .NET is the successor of Active Server Pages (ASP) technology developed by Microsoft and is a server-side web application framework. It is built on the CLR and has the advantage of writing ASP .NET code using any supported .NET language. It was first released in January 2002...

08 August 2019: U.S. visa policies are discouraging foreign tech workers from working in startups, according to a study by Cornell University and UC San Diego researchers. The study examined the hiring of foreign workers educated in science and engineering at American universities...
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402128649.98/warc/CC-MAIN-20200930204041-20200930234041-00228.warc.gz
CC-MAIN-2020-40
2,556
31
https://help.doit-intl.com/docs/cloud-analytics/reports/schedule-report
code
Schedule report email delivery

You can send a copy of your Cloud Analytics report to yourself or your stakeholders on a regular basis by setting up an email delivery schedule.

Required permissions: Cloud Analytics

Create a scheduled email delivery

There are two ways to schedule emailed reports:
- Via a specific report's page
- Via the main Cloud Analytics page

Scheduling via a report's page

Begin by opening a report that you have "Owner" access to. Then, click the clock icon on the right-hand menu bar.

Next, configure the scheduled delivery:
- Add the other users you'd like to include on the report's distribution
- Optionally, update the email's subject and message to provide more context for recipients
- Set when you'd like it to be delivered (using cron syntax)

If you've included someone who doesn't have access to the report, you will be asked to add that user to the report. You can choose whether to grant "Viewer" or "Editor" access.

About cron expressions

The delivery time and recurrence of a Cloud Analytics report can be configured using a cron expression. Cron is a time-based job scheduler originally used in Unix-like computer operating systems. A cron expression is a string comprising five or six fields separated by whitespace. The fields of a cron expression and the possible values for each field are:

Minute: 0–59
Hour: 0–23
Day: 1–31
Month: 1–12 (Jan-Dec)
Day of the week: 0–6 (Sun-Sat)

In addition to these values, every field in a cron expression can also use special characters:

any (*): for example, if the day-of-month and day-of-week fields are both set to *, the schedule matches every day
range (-): for example, if the day-of-week field is set to Mon-Fri, the schedule matches Monday through Friday
list (,): for example, if the month field is set to 1,6, the schedule matches January and June
step (/): for example, if the month field is set to */2, the schedule matches every second month

Schedule intervals lower than daily are not permitted with Cloud Analytics, meaning the first two fields must be numbers in the ranges 0–59 and 0–23.

To create a schedule that repeats, use special characters to describe when that schedule is to repeat. For example, the cron expression 30 8 * * Mon-Fri configures a schedule to start at 8:30 AM on every Monday, Tuesday, Wednesday, Thursday, and Friday.

Select the access level you'd like to give them, and click "Add" to give them access to the report and add them to the list of scheduled report recipients. Your scheduled report will look something like the image below. From the email, you'll be able to preview the report and open an interactive report in Cloud Analytics by using the "Open Live Report" button.

Scheduling via the Cloud Analytics page

You may also schedule emailed reports from the main Cloud Analytics page. First, find a report that you are the owner of. Then, select the horizontal ellipsis icon in the right-most table column, and select Email Schedule from the drop-down menu. From there, configure your report as described above, modifying the message and interval as well as the recipients.

Updating scheduled delivery

To update the scheduled email delivery configuration, open a report that has a configured schedule and select the blue clock, as before. You can also update a report's email delivery schedule directly from the Cloud Analytics screen by selecting the horizontal ellipsis, as before.

Subscribing to other people's scheduled deliveries

Using the same methods as above, you can also subscribe to other people's scheduled deliveries.

Deleting scheduled delivery

If you need to delete the scheduled report, please use the Delete button on the "Schedule Report Email Delivery" dialog.
A few limitations exist for scheduled reports:
- Each report can only have a single email delivery schedule
- Preset reports cannot be scheduled, though you can clone a preset report and schedule the clone
- The person who scheduled the report is always included in the email
- You can't schedule a report to be delivered more than once a day (see the sketch below for a quick check of this rule)
- Only chart-based reports can be scheduled (i.e., no tables or heat maps at this time)
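A minimal sketch (ours, in Python) of validating the once-a-day rule above: the minute and hour fields of a 5-field cron expression must be plain numbers.

import re

def validate_daily_cron(expr):
    # True only if the schedule fires at most once per day:
    # minute in 0-59 and hour in 0-23, both plain numbers.
    fields = expr.split()
    if len(fields) != 5:
        return False
    minute, hour = fields[0], fields[1]
    if not (re.fullmatch(r"\d{1,2}", minute) and re.fullmatch(r"\d{1,2}", hour)):
        return False
    return 0 <= int(minute) <= 59 and 0 <= int(hour) <= 23

print(validate_daily_cron("30 8 * * Mon-Fri"))  # True: daily at 8:30 on weekdays
print(validate_daily_cron("*/5 * * * *"))       # False: every five minutes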
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572220.19/warc/CC-MAIN-20220816030218-20220816060218-00682.warc.gz
CC-MAIN-2022-33
3,962
44
https://flowingdata.com/2011/01/17/a-guide-for-scraping-data/
code
A guide for scraping data

Data is rarely in the format you want it. Dan Nguyen, for ProPublica, provides a thorough guide on how to scrape data from Flash, HTML, and PDF. [via @JanWillemTulp]
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255837.21/warc/CC-MAIN-20190520081942-20190520103942-00312.warc.gz
CC-MAIN-2019-22
682
9
http://ergodica.blogspot.com/2008/11/sometimes-you-sweat-small-stuff.html
code
Sometimes you sweat the small stuff
I am going through one of those "for want of a nail" periods in my life. It's not that a kingdom, my kingdom, is on the verge of being lost. The scenario is more that there are things that I want to get done. They seem to be simple things. At least they seem simple until I actually try to accomplish them. My options break down to: A. throw money at the problem, or B. solve a series of very small interdependent problems that lead to other interdependent problems that eventually lead to the ability to get things done. And the internal dialogue goes like this: "Ok. Deep breath. This is totally doable. If I want to save the kingdom what I really need is to win this battle, which requires me to have that rider, which means getting this horse, which means getting that horseshoe replaced. Yeah, I'd better get online and find a blacksmith to take care of this. Am I bringing my own horseshoes? Maybe I'll bring one just in case. And if I don't end up replacing them all, I really should ask him to check the nails on the rest of them. I wonder if yelp.com has any entries on blacksmiths ..."
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124478.77/warc/CC-MAIN-20170423031204-00155-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,127
5
https://facilityexecutive.com/tag/svigals-partners/
code
Svigals + Partners Resource pages for "Svigals + Partners"-related posts for facility managers (FMs), building operations professionals and decision-makers in all industry sectors. New Haven Innovation Labs is a recently renovated incubator space for startup bioscience research organizations, located inside the historic John B. Pierce Laboratory building. The biophilic approach at YCSC, shown in studies to produce positive behavioral changes, is designed to instill a sense of calm and comfort.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494852.95/warc/CC-MAIN-20230127001911-20230127031911-00562.warc.gz
CC-MAIN-2023-06
498
4
https://www.fatwa-online.com/foot-and-mouth-disease/
code
Fatwa-Online has just been informed that Shaykh ‘Abdul-‘Azeez Aal-ash-Shaykh made the following announcement: It is permissible for Muslims living in Europe not to sacrifice animals during this ‘Eed al-Adhaa festival because of the foot-and-mouth disease. If Muslims in Europe find themselves in a situation where they are prevented from sacrificing a beast after what has been said about the state of the animal, they must abide by the rules. The sacrifice of an animal is not an obligation for Muslims but it is a Sunnah. Those who have the means to sacrifice an animal can do it. For those who do not, it is not obligatory.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296815919.75/warc/CC-MAIN-20240412101354-20240412131354-00382.warc.gz
CC-MAIN-2024-18
632
2
http://www.dbforums.com/showthread.php?996215-Help%21-Need-ideas-for-linked-Excel-table
code
Unanswered: Help! Need ideas for linked Excel table I posted this question earlier, but did not get replies. I hope someone would help me out. I have a linked Excel spreadsheet table which contains monthly sales data by customers and by product (ie, Customer --> Product # --> Month --> Rev$). I have to link this in a query in Access that also links several Access tables (Customer, Purchasing Organization, Contract, etc). To preserve data integrity, a unique record in the sales table would have to have a composite primary key, composed of a customer#, a contract#, the product#, and month of sale. I cannot define any of these in the linked Excel table. I do not wish to do an upload as: (1) the amount of data will continuously increase, and hence overload the database; and (2) since the data is funneled through several enterprise systems before it reaches Excel, I have run into the problem of data typing when I do attempt to get it into Access. A link serves me best, I think, and I would like to work with this, if I can resolve the problem outlined above. Ok. I have a linked spreadsheet that shows up in the Access database window. This contains sales data. If I now open the table in design view, I first get a Windows message saying: Table "ABC" is a linked table with some properties that can't be modified. Do you want to open it anyway? If I say yes, it will open in design view. Now, if I define the primary keys (Access will seemingly allow you to do this) and then attempt to save the table, I have another Windows message box that says: Database can't save property changes to linked tables. Do you want to continue anyway? Irrespective of your answer, the changes you make (ie, defining primary keys) will be discarded. This is what I meant. I need to have the PK's defined and also the relationship in the query that links several other tables so that the report that comes off of this works correctly. I cannot define the PK's and I also cannot define the relationship to preserve referential integrity.
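For illustration, this is what that composite key would look like in Access/Jet DDL on a local, non-linked table (all table and column names here are hypothetical):

CREATE TABLE SalesLocal (
    CustomerNo TEXT(10)  NOT NULL,
    ContractNo TEXT(10)  NOT NULL,
    ProductNo  TEXT(15)  NOT NULL,
    SaleMonth  DATETIME  NOT NULL,
    Revenue    CURRENCY,
    CONSTRAINT pkSales PRIMARY KEY (CustomerNo, ContractNo, ProductNo, SaleMonth)
);

A linked Excel sheet cannot persist a definition like this, which is exactly the limitation described above.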
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120349.46/warc/CC-MAIN-20170423031200-00489-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,029
11
https://users.rust-lang.org/t/tokio-async-std-compatibility/102617
code
As far as my understanding goes, both async_std and tokio are different runtimes and are incompatible. Currently I have a lot of code built on top of tokio and its various facilities, and recently I was looking at a new technology; however, it's built on top of async_std. Is it reasonable to pull in this new technology even if it uses async_std instead of tokio? I assume in order for this to work I'll need a separate runtime just for the async_std stuff and then add some sort of shim/communication layer if I wanted this to talk to the existing tokio stuff. Does this seem reasonable/doable? Or should I just drop the new technology, since integrating async_std and tokio would be too much to maintain?
There are workarounds when you need two async runtimes in the same project, but before taking you down that route it would be better to know exactly what you are trying to do. Pulling in a second runtime just for some new functionality is not something most would recommend.
The mentioned tech is a library called "Zenoh". It's basically a pub/sub & RPC framework for low-latency network communication. It's written in Rust and has a crate available to use - but it is built on async_std.
Pulling in a second runtime just for some new functionality is not something most would recommend.
Makes sense - but if this had to be done, what would be the best way to go about it? All of the other network I/O is done via tokio (or more specifically, libraries that use tokio under the hood). So would I just keep all of the "Zenoh" comms separate and just do message passing between the two?
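One common shape for that message-passing bridge is sketched below. It is not Zenoh-specific, and it assumes the tokio crate (with the rt-multi-thread feature) and the async-std crate as dependencies: host the async-std work on its own OS thread and talk to the tokio side over a plain std channel.

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // Dedicated thread hosting the async-std executor (e.g. the Zenoh comms).
    thread::spawn(move || {
        async_std::task::block_on(async move {
            // Whatever arrives on the async-std side gets forwarded here.
            tx.send("sample message from the async-std side".to_string()).ok();
        });
    });

    // The existing tokio runtime keeps running everything else.
    tokio::runtime::Runtime::new().unwrap().block_on(async move {
        // rx.recv() blocks, so keep it off the async worker threads.
        let msg = tokio::task::spawn_blocking(move || rx.recv())
            .await
            .unwrap()
            .unwrap();
        println!("tokio side got: {msg}");
    });
}

For anything higher-volume, an async-aware channel whose receiver can be awaited from both executors avoids the spawn_blocking hop, but the thread-plus-channel layout above is the simplest thing that works.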
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101779.95/warc/CC-MAIN-20231210092457-20231210122457-00229.warc.gz
CC-MAIN-2023-50
1,574
8
https://superbasic-manual.readthedocs.io/en/latest/C/check-pct.html
code
CHECK% (DIY Toolkit Vol E)
Coercion is the process of converting a string which holds a number into the actual number. It is a powerful in-built feature of SuperBASIC. This allows you to create input routines such as:
100 dage% = RND(10 TO 110)
110 INPUT "Your age [" & dage% & "]?" ! age$;
120 IF age$ = "" THEN
130 age% = dage%: PRINT age%
140 ELSE
150 age% = age$: PRINT
160 END IF
Although SuperBASIC coercion is very powerful, it does have its limits when non-numeric strings are entered. If age$ was “44”, age%=age$ will assign 44 to age%. Even if the string was not really a number, eg. “44x5”, SuperBASIC will simply ignore everything behind the legal characters (ie. age%=age$ would still assign 44 to age%). However, if age$ contained something like “no thanks” it cannot be coerced and the program will fail with an ‘error in expression’ (-17).
The function CHECK% exploits the fact that SuperBASIC is obviously able to see the difference between a valid number, or what comes close to that, and nonsense. CHECK% carries out an explicit coercion for integer numbers: it will try to make a number from the supplied parameter in the same way as SuperBASIC would. However, CHECK% will not stop with an error for unusable strings; instead it returns -32768. Although “-32768” is converted correctly to -32768, this value must be reserved, because the program cannot know whether the input was illegal or really -32768.
Let’s rewrite the above example for coercion with CHECK%. We have to replace the implicit coercion age%=age$ with age%=CHECK%(age$) and put INPUT into a loop:
100 dage% = RND(10 TO 110)
110 REPeat asking
120 INPUT "Your age [" & dage% & "]?" ! age$;
130 IF age$ = "" THEN
140 age% = dage%: PRINT age%
150 ELSE
160 age% = CHECK%(age$): PRINT
170 IF age% > -32768 THEN EXIT asking
180 END IF
190 END REPeat asking
WHEN ERRor can trap the coercion failure. See the Coercion Appendix also.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100650.21/warc/CC-MAIN-20231207054219-20231207084219-00862.warc.gz
CC-MAIN-2023-50
1,920
10
https://www.routeprotocol.com/vlan-access-control-list-vacl/
code
A VACL can filter traffic bridged within a VLAN or routed in and out of a VLAN.
! Block ICMP
ip access-list extended ICMP
 deny icmp any any
! Forward everything else
ip access-list extended OTHER
 permit ip any any
! Construct the access map
vlan access-map VACL_10 10
 match ip address ICMP
 action drop
vlan access-map VACL_10 20
 match ip address OTHER
 action forward
! Apply VLAN filter
vlan filter VACL_10 vlan-list 10
To create and apply a VLAN map:
Define a VLAN access map using the command vlan access-map <name> <sequence>
Configure the match statement using the command match ip address <acl-number/name>
Configure the action to take with the command action followed by drop or forward
Activate the VACL with the command vlan filter <access-map> vlan-list <vlans>
When crafting an access control list to be used with a vlan access-map, only use permit statements. This is because the access control lists are only used as matching criteria for the match statements and do not actually take any action on the packet being evaluated.
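To sanity-check the result, the usual IOS show commands can be used (a quick sketch; the map name matches the example above):

show vlan access-map VACL_10
show vlan filter

The first displays the match/action clauses of the map; the second shows which VLANs each filter is applied to.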
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.79/warc/CC-MAIN-20240227153053-20240227183053-00853.warc.gz
CC-MAIN-2024-10
1,021
12
https://techcommunity.microsoft.com/t5/microsoft-365-insider/behind-the-scenes-empowering-visual-expression-with-office/m-p/3285191
code
Growing up in Australia, Aimee Leong was intrigued by deep questions. “I thought I wanted to be a theoretical physicist,” she said. “I just thought that questions around things like dark matter were super interesting.” But, like many millennials, her imagination was captured by the rise of creativity in personal technology. She had an eye for design, even in something as simple as a to-do list app. “There was an app called Wunderlist that had an amazing user experience,” she said. “The animation made you feel very celebrated. So, I started following blogs from some of the people who created this app. I found it fascinating, how technology could make your world more enjoyable and more beautiful.” Aimee quickly realized that what she really wanted to do in her career was help create those kinds of beautiful products and experiences. So, after graduating college, she joined Microsoft and moved into a Product Manager role on the Office Graphics team. “We are building things at the intersection of productivity, self-expression, and creativity, which is what I’m really passionate about,” she observed. We spoke with Aimee about her cutting-edge work on Office graphics, how responsible AI impacts that work, and the role that the Office Insiders program plays in releasing high-quality products at Microsoft.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816535.76/warc/CC-MAIN-20240413021024-20240413051024-00877.warc.gz
CC-MAIN-2024-18
1,341
7
http://cknotes.com/parsing-extremely-large-xml-files/
code
I’m trying to read a very large XML file. It’s about 500MB. The content is a list of records. There are about 100,000 nodes with the same tag that contain single records. Is there a limit on the file size or number of nodes that can be processed?
There is no limit other than running out of memory. Using a DOM style XML parser for extremely large XML files containing a huge number of elements (nodes) is not a good choice. A DOM style parser (Document Object Model) is one where the entire XML document is loaded into memory and stored in some sort of document object model. A better choice would be to use a SAX style parser. See https://en.wikipedia.org/wiki/Simple_API_for_XML
In my opinion, a format such as XML should never be used for huge datasets. The original mistake was made when the software architect decided to use XML as the data storage format. Repeating the same opening and closing XML tags severely bloats the data and imposes huge memory and database storage requirements that could’ve been avoided.
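To make the streaming idea concrete, here is a minimal Python sketch (the file name records.xml and the tag name record are assumptions); it visits each record and releases it immediately, so memory stays flat regardless of file size:

import xml.etree.ElementTree as ET

count = 0
# iterparse streams the document instead of building a full DOM
for event, elem in ET.iterparse("records.xml", events=("end",)):
    if elem.tag == "record":
        count += 1    # process the single record here
        elem.clear()  # drop the subtree we just handled to free memory
print("records processed:", count)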
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203947.59/warc/CC-MAIN-20190325112917-20190325134917-00517.warc.gz
CC-MAIN-2019-13
1,017
4
https://www.odesk.com/o/profiles/browse/c/writing/fb/0/skill/android-sdk/
code
Android SDK Job Cost Overview
Typical total cost of oDesk Android SDK projects based on completed and fixed-price jobs.
oDesk Android SDK Jobs Completed Quarterly
On average, 44 Android SDK projects are completed every quarter on oDesk.
Time to Complete oDesk Android SDK Jobs
Time needed to complete an Android SDK project on oDesk.
Average Android SDK Freelancer Feedback Score
Android SDK oDesk freelancers typically receive a client rating of 4.62.
Results-oriented software developer with 4 years of experience. Looking for interesting, challenging projects that would involve all my skills to accomplish them. Passionate about creating perfect code. I am constantly striving to learn new technologies and looking for ways to improve myself in the rapidly changing industry. My current goal is to create wonderful mobile applications that would make people's lives easier. Addicted to programming and technology. Extended knowledge in related areas contributes to final product quality, because everything in IT is related. I am seeking to use my extensive skills base in a mobile application development role (purely Android SDK), where my skills can be utilised to the utmost. Programming Languages/Technologies • Java Development: 4 years. RDBMS • MySQL InnoDB • SQLite • MongoDB Frameworks • Otto • ButterKnife • Android SDK • Spring for Android • Social frameworks • WebRTC (Web Real-Time Communication) is an API definition drafted by the World Wide Web Consortium (W3C) that supports browser-to-browser applications for voice calling, video chat, and P2P file sharing • and Others Application/Web Servers • Apache • IIS Development Tools • Aptana & Plug-ins • Xcode • Eclipse • IntelliJ IDEA • PrimalScript Testing Tools • HockeyApp
Creating and supporting complete CI solutions, based on a huge variety of CI tools, since 2012. My #1 goal will always be to meet your needs and deadline. I'm honest and fair. My job consists of automating the following processes: compiling and analyzing source code, running tests, version control, software release management and other software development processes. I have worked with iOS applications, Android applications, BlackBerry and Java projects. Configuring: Jenkins, TeamCity, CruiseControl, TFS. Most used tools: Jenkins, Gradle, Apache Ant, bash, PowerShell, TeamCity. I also have huge experience in system administration (Linux, Windows, MacOS); MySQL; web servers (nginx, Apache); application servers (WebSphere, Tomcat, JBoss); system monitoring (Zabbix); the SonarQube source quality management platform; Git support, SVN support, the Atlassian stack (such as Jira and so on).
Hi, I'm a software developer with five years of experience, skilled in C/C++, Java, and mobile programming (Android and iPhone). I also worked a lot with Layered Service Providers (LSP) for network filtering and developed kernel drivers (PnP, filter drivers) on the Windows platform. I consider C/C++ my main strength since I worked for a known security company where I used those languages, but I enjoy developing Android applications too. Thank you.
20+ years of experience in the industry. Best content creation/delivery techniques. Content optimization is the trademark. Best output time execution. I have a combined graduate-level background in engineering and computer science. As an educator, I have 10+ years of experience in communicating technical information to diverse audiences using a variety of media. I have served as a reviewer for various conferences, as well as journals and a textbook. 
I have academic publications in topics including artificial intelligence, digital image processing, GoPro Hero cameras, metallography, and online education.
I am here to show my software development skills in Android and Java. My freelance projects on vworker.com were mainly on Java Swing based app development, PHP webpage scraping, and Java game development: artificial intelligence. I have added skills in Android by working for Amazon.com for the past 2 years. I am basically inclined towards Android projects since that is my most recent skill, but I would also like to contribute to projects based on my previous skills.
In the past few months I have worked on Android and developed many Android applications using Eclipse, the two main ones being a Virtual Psychotherapist (using AI) and an enhancement of the game Tic Tac Toe. I have also developed many desktop applications using Java & C++, including a Shopping Mall Management project. I also have experience in HTML, CSS & PHP.
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430448956264.24/warc/CC-MAIN-20150501025556-00045-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
4,572
14
https://www.thegeekdiary.com/issue-opening-a-firewalld-port-in-centos-rhel-8/
code
Opening a new port or adding a service in firewalld appears to succeed, but connections still fail without an error. On the server, port 80 is open as per the output below:
# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens3
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 80/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
But when trying to connect from another host, the error below is reported:
$ nc -v [SERVER_IP_ADDRESS] 80
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: No route to host.
By default, the firewalld backend is configured to nftables. Direct rules used by firewalld might impact the way the rules are applied: direct rules that ACCEPT packets don’t actually cause the packets to be immediately accepted by the system; those packets are still subject to firewalld’s nftables ruleset. For direct rules that DROP packets, the packets are immediately dropped. If a general DROP or REJECT rule is configured as the last of the direct rules, it will cause all nftables rules to be ignored. The last line in the following output is one example:
# iptables -vnxL INPUT
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
 2133 309423 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
 27 1620 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
 10 524 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
 93 4740 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
It can also be verified that this is configured in the firewalld direct rules:
# grep -B4 INPUT /etc/firewalld/direct.xml
<?xml version="1.0" encoding="utf-8"?>
<direct>
<passthrough ipv="ipv4">-N BareMetalInstanceServices</passthrough>
<passthrough ipv="ipv4">-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT</passthrough>
<passthrough ipv="ipv4">-A INPUT -p icmp -j ACCEPT</passthrough>
<passthrough ipv="ipv4">-A INPUT -i lo -j ACCEPT</passthrough>
<passthrough ipv="ipv4">-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT</passthrough>
<passthrough ipv="ipv4">-A INPUT -j REJECT --reject-with icmp-host-prohibited</passthrough>
Verify whether the direct rules are really necessary; probably the important rules are already configured as “normal” rules. To completely remove the direct rules, remove the file /etc/firewalld/direct.xml:
# mv /etc/firewalld/direct.xml /etc/firewalld/direct.xml_bck
If direct rules are needed, remove the last REJECT rule from the direct rules and configure it in nftables/firewalld.
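After backing up direct.xml as shown above, reload firewalld and confirm the direct ruleset is empty; both are standard firewall-cmd invocations (a quick sketch):

# firewall-cmd --reload
# firewall-cmd --direct --get-all-rules

If the second command prints nothing, the REJECT passthrough is gone and the normal nftables rules (including the 80/tcp opening) apply as expected.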
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100287.49/warc/CC-MAIN-20231201120231-20231201150231-00699.warc.gz
CC-MAIN-2023-50
2,579
13
http://viplooters.xyz/archives/4922
code
Top Tier Providence, Secretly Cultivate For A Thousand Years – Chapter 27
Novel: Top Tier Providence, Secretly Cultivate For A Thousand Years
Han Jue held his breath and thought silently.
Han Jue waved his right hand expressionlessly.
It was actually Xing Hongxuan.
Xing Hongxuan turned and smiled. “Don’t worry. I’m more afraid that you will attract the attention of other women.”
Chen Santian had long let down his guard. He believed Han Jue would keep him alive for some purpose. He didn’t expect to be killed now.
If news of this were to spread, plenty of people would be scared to death.
Half a month later.
Xing Hongxuan waved her right hand, and ten medicinal bottles appeared on the ground.
Han Jue began to cultivate his wind-element cultivation potential.
Han Jue put everything into his Little World Belt.
The moment Chen Santian’s corpse fell to the ground, a ball of light emerged from his body.
Chen Santian was facing away from Han Jue as he frowned and thought. Could it be…
Half a month later.
The cave abode was silent.
Chen Santian had already calmed down.
Afterward, her expression changed drastically, and she asked in a trembling voice, “Could it be that you’ve already…”
Han Jue wanted to vomit blood.
[Zhang Kunmo has hatred towards you. Current Hatred: 5 stars]
Chen Santian was in a dilemma. When he thought of his defeat a year ago, he shuddered.
Translator: Atlas Studios Editor: Atlas Studios
This time, it was much simpler than a year ago. Chen Santian didn’t even have time to dodge.
If I’m strong enough, I’ll definitely make you pay a hundred times over!
Xing Hongxuan immediately jumped towards him, but he blocked her with his spirit energy.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00049.warc.gz
CC-MAIN-2023-14
2,226
35
http://support.elliott.com/knowledgebase/articles/1891585-how-to-use-report-desk-admin-perspective
code
Release Date: 12/12/2018
Report Desk provides a powerful development environment for Netcellent to deliver modern reports with rich elements of proportionally spaced fonts, graphics and line draws. It allows us to output reports to PDF, CSV and XLSX formats. In addition, Report Desk uses the PSQL relational engine to access the Elliott database.
With Elliott V8.6, use <ElliottRoot>\bin85\DDF2BTR.EXE. Please go to http://support.elliott.com/knowledgebase/articles/850704-when-and-how-to-use-ddf2btr-exe-utility for more information on how to use this utility. If you have previously converted your database with DDF2BTR.EXE to make your data compatible with third-party relational applications like Crystal Reports or Web Services, then there’s no need to convert it again. If you are not sure whether you have previously done so, call Netcellent to confirm.
Determine Which Users Can Use Report Desk
The following flag is new in Password Setup -> User Global Security -> Screen 7:
By default, no users will be allowed to run Report Desk reports. You can individually allow users access to Report Desk by entering "Y" in field 1 on this screen. Alternatively, you can go to Global Default Security and set the default value for this option to "Y"; then all users not specifically denied access here will be able to use Report Desk. Keep in mind that access to a Report Desk report requires access to the menu item from which the report gets launched.
Make a Limited Number of Report Desk Reports Available to Users
Even though you can limit the users who can access Report Desk's reports with the above security flag, you may not be ready to make all of the reports available to these users. You can limit the users' access to Report Desk's reports by changing the EL850U.CFG file. See the following KB articles for more details:
Determine Which User Can Modify Report Desk Reports
The following flag is new in Password Setup -> User Global Security -> Screen 6:
Allow to Modify Report Desk’s User Def Report - The default value is N. This flag determines whether a user can modify a Report Desk report. Initially, you may only want to give your admin user this ability. Once you are more familiar with the functionality of Report Desk, you can decide who should have this flag turned on.
Match Your Database Version with the Right Database Name (DSN)
By default, Report Desk uses databases like ELI86DATA??, where ?? is the Elliott company number. These databases are created for you automatically during the installation of Elliott V8.6. ELI86DATA?? is based on the V8.5 DDFs, which have the document number defined as a string. If your database is still in Elliott V8.2 format, then you should use database ELIDATA?? instead. You can make this change with the Config button on the toolbar if you log in as an Elliott SUPERVISOR. Or you can browse to and execute <ElliottRoot>\Bin85\EL850CF.EXE if you log in as a Windows admin user. Once in the configuration application, select the Databases tab. The database names that are red need to be added to the PSQL Control Center. The database names that are black already exist. Clicking the Create Databases button will create the databases as shown. If your database has not been converted to the V8.5 format, you can change it to the V8.2 naming convention. Click on the database name and pick the V8.2 name format from the list. In the example above you would select ELIDATA13 instead of ELI85DATA13. Once selected, the format will change to V8.2. 
Creating the Matching Database Name (DSN)
Next, click on Create Databases to start the Elliott Database Creation program. Keep in mind that you must do this on the PSQL server and log in as a Windows admin user. Alternatively, you can navigate to and execute the following program: <ElliottRoot>\Bin85\EL850DB.EXE. If you are still using the Elliott V8.2 database format, click on the option Create 8.2 Databases. A list of database names that have not been created using the standard Elliott V8.2 naming convention is displayed in the left panel. A list of database names that have already been created is displayed in the right panel. Choose Check All to check all of the database names. Choose Uncheck All to uncheck all of the database names. Once you have selected the databases to be created, click on Create Database(s). This will create the checked databases. The database names will be removed from the left panel and shown in the right panel. Once all of the databases have been created for the correct version, choose Exit to return to the Elliott V8.6 Configuration utility. Click Finish to complete the changes.
Once a user has been given rights to modify Elliott Report Desk User Design Reports (UDR), they will have access to two additional buttons on the report parameter screens.
SQL: This button will display a screen with the SQL statement that will be used to retrieve data for the report. Users can temporarily modify the SQL statement and test it with the Test SQL button. The Test SQL button will test the SQL statement shown. A window with result information will be shown after the statement is executed. The Copy to Clipboard button will copy the SQL statement to the clipboard for use outside of Elliott. The Exit button will exit the screen.
Modify: This button will take the user to the User Defined Report Designer screen. There are two pieces of information that are required to create a UDR:
1. A template. The template defines the primary table(s) for the report, the basic SELECT statement for the report and, optionally, any predefined SQL formulas that can be used. The primary purpose of the template is to define the join relation of multiple tables, which is critical for performance. For this reason, only Netcellent or your developer can define a new template. A template can be used by multiple reports that share the same basic SELECT statement. A template can have more than one variation -- joining different tables, for example. A template consists of a name and a sequence number, like APVENLST.1 or APVENLST.2.
2. A report definition. The report definition defines the parameter input criteria for the report, the columns in the report and, if allowed by its template, an ORDER BY clause for the report. A report consists of a template name, template sequence number and a report sequence number, like APVENLST.1.1.0 or APVENLST.2.1.0.
From the User Defined Report Designer screen, users can create their own version of the report with the information that is most important to them.
Title: This is the title shown on the report when it is rendered.
Where: The columns specified in this section are used to generate the criteria on the report parameter screen. They are also used to generate the WHERE statement used to retrieve the data.
- To add a Column, drag a table column from the Available Columns TreeView below.
- To delete a Column, right click on the row and select Delete Row.
- Rearrange the order of the Columns by dragging within the grid. 
Note that the order of these WHERE clauses can affect performance.
- To change the Operator, click on the value and use the drop down to select an operator. This is a list of SQL-supported operators for WHERE clauses.
- To change the Prompt, click on the value and type a new value.
- To change the Type, click on the value and use the drop down to select a new value. In addition to String, Date and Number, there are some special types, like CUS_NO. These special types provide additional functionality, like right-justify and zero-fill if numeric.
Order By: This is the sort order for the report. If no ORDER BY is specified, the report will print in order of the primary key.
- Add a Column by dragging a table column from the Available Columns TreeView below.
- Delete a Column by right-clicking on the row and selecting Delete Row.
- Rearrange the order of the Columns by dragging within the grid.
- Change the ORDER BY Sequence (ASC or DESC) by using the drop down list.
Available Columns: The Available Columns TreeView is populated by all the tables specified in the SELECT statement of the template, along with [Date], [System] and [Formula] nodes. Expand a node to see what columns are available. You can drag an entry from Available Columns to the Report Columns and change the column heading. You can also drag an entry to the Where grid (for specifying report input parameter criteria).
Report Columns: Columns shown on the report when it is rendered. Add a Column by dragging a table, date, system or formula column from the Available Columns TreeView on the left. Delete a Column by right-clicking on the row and selecting Delete Row. Rearrange the order of the Columns by dragging within the grid. The Column Heading will default to the most popular heading for the column. You can use the drop down list to select a different one, or enter an entirely new one. The headings in the list appear in order of most-to-least popular from top to bottom. Each time someone saves a design, the changed column values will be added to the list of popular headings and their popularity usage increased. Length defaults to the database width when a column is first dragged to the grid. You may change the length depending on how much room you want the column to take on the report. The Format value should be left blank for strings; for other types, you may choose one of the available formats.
Show SQL: This button will show the SQL statement that is used to retrieve the data, including the WHERE clause created by the fields included in the Where grid.
Test: This option will allow the user to test the report changes without saving the design.
Save: This option will save the report design. After the design is saved, the revision number on the report will change. For example, if you change the design for ARSHPLST.B.1.2.0, it will be saved as ARSHPLST.B.1.2.1. Zero revisions are the standard report definitions provided by Netcellent or your developer when Elliott is installed or updated. Numbered revisions represent custom versions created by users. After exiting the screen, the new custom report definition will be available.
Save New: This option will save the report design but will increment the revision number. For example, if you change the design for ARSHPLST.B.1.2.2, it will be saved as ARSHPLST.B.1.2.3.
Delete: This option deletes user defined reports. Base reports provided by Netcellent cannot be deleted.
Exit: Choose this option to exit the User Defined Report Designer screen. 
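To give a feel for what the Show SQL button displays, a generated statement might look like the sketch below (table and column names here are purely illustrative, not actual Elliott schema):

SELECT v.VendorNo, v.VendorName, v.YtdPurchases
FROM VendorFile v
WHERE v.VendorNo >= '000100' AND v.VendorNo <= '000999'
ORDER BY v.VendorName ASC

The WHERE clause comes from the Where grid's parameters, and the ORDER BY comes from the Order By grid described above.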
Additional Information on Report Desk IDs: Each UDR report has a unique ID with the following format: NAME.X.1.2.3, where:
- NAME is the name of your UDR report, in this case ARTRMLST.
- X can have the value of B (Base) or E (Enhancement). Base means this report was originally created by Netcellent, and Enhancement means this report was created by your developer.
- 1 - the first numeric digit represents the template ID. A template usually represents a unique way of joining the tables. As an end user, you can't create your own template, because the joining must be done by Netcellent or your developer to ensure the best database performance.
- 2 - the second numeric digit represents the different types of report options derived from the same template. This can be a different sorting sequence, different input selection options, or different columns to be included on the report.
- 3 - the third digit is the variance of each type of report. The value zero means this is the original report developed by Netcellent or your developers. Other values (greater than zero) are revisions of the report made by you. You can, for example, change to different sorting, selection and column options and save your own version of the report. Netcellent or your developer may change the reports that end with zero in the future, but your derived reports that do not end with zero will never be overridden.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107892062.70/warc/CC-MAIN-20201026204531-20201026234531-00307.warc.gz
CC-MAIN-2020-45
11,907
78
https://shagility.nz/making-reports-faster/
code
We are using a combination of SAS Web Report Studio, for report creation/rendering, and an Oracle database for data storage. When users run reports, they effectively invoke this process:
- Connect to metadata and obtain authorisation
- Access the info map and generate a query
- Pass the query to Oracle
- Oracle passes data to WRS
- WRS renders the report
To make this faster, we have been looking at configuring report scheduling to see if we can pre-cache the reports.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00891.warc.gz
CC-MAIN-2023-50
454
8
https://www.reddit.com/r/linux_gaming/comments/c7od6d/civilization_vi_no_longer_works_in_linux/
code
Ryzen 2700x / 2080ti. It stopped working on Pop OS over the last week or so. It boots to the opening screen with no menu. I switched to Ubuntu proper on the same machine with the same results. Lots of similar comments in the Steam chat. Pisses me off 😡 Feral, where are you? If you're going to stay in business in the age of Proton, your games need to perform at least as well as SteamPlay! Canada Day weekend watching friends playing without me.
It's still working well for me on Debian; perhaps something broke with newer drivers or kernel?
*Edit: There are some workarounds for other issues on the Arch wiki that mention libfreetype. If text isn't showing up on the main menu then it could be related to that lib. With the libfreetype issue, the game will refuse to even start, won't even show the launcher, and running it from the terminal gives one of the following errors:
./GameGuide/Civ6: symbol lookup error: /usr/lib/libfontconfig.so.1: undefined symbol: FT_Done_MM_Var
ERROR: ld.so: object '/usr/lib/libfreetype.so' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS64): ignored.
Source: I had that issue.
I don't have any solutions, but I feel you. One time Civ 5 randomly decided to become unplayable for me because of some compatibility issue with glibc. Worst part is, I didn't find out about the problem until a 2-almost-3 day power outage where I couldn't burn time playing Civ.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525659.27/warc/CC-MAIN-20190718145614-20190718171614-00532.warc.gz
CC-MAIN-2019-30
1,408
13
https://docs.microsoft.com/en-us/archive/blogs/ericlee/windows-vista-for-extreme-programming
code
Windows Vista for Extreme Programming?
As you can probably tell, the bright shiny lights of the soon-to-be-released products like Windows Vista and Office 2007 have caught my eye lately. I promise I’ll be back to Team Foundation Server blogging soon. I’ve been playing with a feature in Windows Vista that I thought might be useful for development teams who are practicing Extreme Programming. The feature is called Windows Meeting Space and is one of the out-of-the-box applications that takes advantage of Windows Vista Peer-to-Peer networking and, more specifically, People Near Me. Both Peer-to-Peer networking and People Near Me are vast areas that I don’t fully understand; Justin Smith does a nice job of talking about the technology in his recent MSDN article. In terms of leveraging what is already there, Windows Meeting Space struck me as an easier-to-use netmeeting/netsharing/livemeeting technology. It seems perfectly suited for a quick code review or pair programming session. I found Windows Meeting Space from the handy search dialog in the start menu. Starting a meeting is pretty easy – easier than most of the Microsoft collaboration tools I’ve used. Basically, pick a name and a password. I like how Windows Meeting Space puts the time into your name automatically. You could also search for existing meetings. Let’s say that we’ll create a code review meeting. Once created, you can invite people from the menu. This is where People Near Me (PNM) comes into play. PNM is supposed to search your subnet for people who have chosen to broadcast their identity. From this list, you can invite people to join your meeting. It might have been because of mismatching versions of Vista, a weird network connection, or something else, but I was never able to get a list populated with people near me. In any case, you can use the “Invite others…” button to create an invitation file that you email around. Or, people in your network can use Windows Meeting Space to find this meeting and invite themselves. For example, on my other Windows Vista machine, I’m running Windows Meeting Space and I can see this meeting we’ve just created. Supposing that we were given the password out of band, we’re able to join this meeting. Now we can start some sharing and do our code review. If you press the sharing icon in the meeting, you get a choice of what kinds of applications you want to share. The PNM API supports a way to determine whether the attendees of your meeting have the same application or not. In our case, since we are doing a code review, we’ll share Visual Studio. As the person who initiated the sharing, you see a subtle toolbar on your desktop that says you are doing the sharing. You have the option of stopping your sharing, pausing it and even sending it to a projector. On the other side, the folks who are attending the meeting see Visual Studio embedded into their meeting space window. The image of VS that I have there is really squeezed; that is because I only have 1 monitor, so I have to show both my host VS instance as well as my shared one on the same real estate. Anyone in the meeting can request control of the application and make code changes, comments, etc. None of this is necessarily anything new – you could always share with LiveMeeting and Remote Assistance and whatnot, but somehow Windows Meeting Space feels easier to use than all of those technologies. 
I like that you don’t have to explicitly send out invitations if you don’t want to; you can just search on your network for a meeting to join. Also, I like that there is an API underneath the covers; in theory you could build this type of sharing right into your application. For example, for Visual Studio, maybe you could set up sharing for individual document windows or something? In any case, I thought this might be an interesting feature in Windows Vista, so I thought I would share.
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141201836.36/warc/CC-MAIN-20201129153900-20201129183900-00455.warc.gz
CC-MAIN-2020-50
3,942
17
http://skycoast.us/pscott/archives/000067.html
code
Compress PDF Print Plug-In for Mac OS X Leopard Apple is advertising over 300 new features in OS X Leopard. Unfortunately, there are also some useful Tiger features missing from Leopard. One of those missing—and somewhat esoteric—features is the ability to create a compressed PDF document directly from the Print dialog. To create a compressed PDF document in Leopard—as described in the revamped online help—you must first create a Preview PDF document from the Print dialog, then select the "File->Save As" menu item where you can request the "Reduce File Size" Quartz filter. Unable to accept that two steps are better than one, I've written an Automator workflow that puts the Compress PDF feature back into Leopard. Just install this package to restore the feature to the Print dialog. See the documentation page for more information. Posted by pscott at November 15, 2007 01:57 AM
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189490.1/warc/CC-MAIN-20170322212949-00063-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
895
6
https://howtofix.guide/hackers-attack-pypi-developers/
code
The PyPI administration has warned that hackers are attacking developers as part of a phishing campaign aimed at maintainers of packages published in the repository. The attackers have already compromised hundreds of maintainer accounts and infected numerous packages with malware, including the popular exotel and spam. Let me remind you that we also talked about Malicious Packages from PyPi Arrange DDoS Attacks on Counter-Strike Servers, and also that 10 Malicious PyPI Packages Steal Credentials. PyPI reported that a real “hunt” is on for developers, after even Django board member Adam Johnson reported receiving a suspicious letter. The email he received urged developers whose packages were published on PyPI to go through a mandatory review process, saying that they risked having the packages removed from PyPI if they didn’t. Johnson said that the phishing site he reached from the email looked pretty convincing, but it was hosted on Google Sites, which gave it away with an Info button in the bottom left corner. Unfortunately, not everyone was as attentive as Johnson. Some developers fell for the phishers’ bait and entered their credentials on the hackers' site, which led to their accounts being taken over and packages being infected with malware. PyPI reported that the infected packages included spam (versions 2.0.2 and 4.0.2) and exotel (version 0.1.6). The malware has since been removed from the repository. According to the PyPI administration, after the reports of attacks, a check was carried out, as a result of which “several hundred typosquats” matching one pattern were identified and removed. The malicious code embedded in the compromised packages is known to pass the user’s computer name to the linkedopports[.]com domain and then download and run a trojan that makes requests to the same domain. In light of this phishing attack, developers were once again reminded of the importance of two-factor authentication, which has recently become mandatory for maintainers of mission-critical projects. PyPI administrators also shared a number of tips for protecting against such phishing, including recommending that users carefully check page URLs before entering their account credentials.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476396.49/warc/CC-MAIN-20240303142747-20240303172747-00373.warc.gz
CC-MAIN-2024-10
2,255
8
https://github.com/kataras/iris/issues/1153
code
about ctx.JSON #1153
You are right, in fact, I am working on signing/verification and optional encryption/decryption for Iris' HTTP responses and requests, it will be there on the next version. But you don't have to wait me, currently you can't do that from the
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583690495.59/warc/CC-MAIN-20190120021730-20190120043730-00622.warc.gz
CC-MAIN-2019-04
419
5
https://blogs.msdn.microsoft.com/ejarvi/2006/09/12/the-protest-series/
code
Over the past few years I've been putting together a rough collection of ideas that I use internally for doing software testing, I call it 'protest' as in professional testing. This is a way of taking some of those thoughts and ideas and putting them out there for the software testing community at large to use and abuse or just simply ignore. Standard disclaimer of course applies to all of it. I dedicate this series to the Pacific Northwest Software Quality Conference and the atmosphere of acceptance that conference has always seemed to have - encouraging the sharing of ideas that work while not being afraid to admit you don't know all the answers as we try to "advance our craft." Unfortunately, I can't make it to the conference this year. Ugh. Still, I plan to keep the posts coming, and hopefully I can make it next time around. Wish me luck!
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865830.35/warc/CC-MAIN-20180523215608-20180523235608-00450.warc.gz
CC-MAIN-2018-22
854
2
http://royalfx.ru/lotus-domino-calendar-error-validating-user-13937.htm
code
After realising that a lot of people are ill-informed about Lotus Notes / Domino and its capabilities, I have established this blog (built on Lotus Notes!). It complements legacy applications in Oracle, SAP and other third-party applications. I did not have the right to use agents in my mail file, so this solution worked. You can use the * for all devices the user uses, or the Device ID seen in the info dump. Normally a user is using only one device, so * should work for you.
Name(0)
trimmed = Trim(Txt)
If (trimmed = "") Then
MsgBox "Please enter some text."
source.
I agree that this is probably not the best way of doing this at all.
Goto Field("Name")
ValidateForm = False
Else
ValidateForm = True
End If
Exit Function
e: MsgBox "error at " & Erl() & " and error is " & Error()
End Function
I have tried using an agent, declaring in Options:
Use "Validate"
and tried calling it in a button using the formula @Command([ToolsRunMacro]; "Val"). But no use; I am not getting the desired output. But probably discussing the best way of doing field validation in Lotus Notes is kind of discussing "personal preferences".
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647251.74/warc/CC-MAIN-20180320013620-20180320033620-00243.warc.gz
CC-MAIN-2018-13
1,113
6
http://www.global-flat.com/c/l-old-school-sundays-with-albert-retey-44977.html
code
URL of the article: http://www.flatmattersonline.com/old-school-sundays-with-albert-retey-3
For this week's OSS we go back to 1994! And the World Championships in Koln, Germany; nowadays the contests run in the legendary Jugendpark! In '94 the Worlds went down at the North Brigade Skatepark, so check out Albert Retey's run right here!
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591216.51/warc/CC-MAIN-20180719183926-20180719203926-00253.warc.gz
CC-MAIN-2018-30
358
3
https://supermariobros.io/garden-tales
code
Garden Tales is a simple take on the ubiquitous match-three genre. You'll have a blast playing Garden Tales, thanks to its popular gameplay and brand-new, enhanced levels. What are you waiting for? Garden Tales is waiting for you and your friends right now. Players have the option of using the mouse or touching the screen to navigate. The goal of this game is to line up rows of identical food items in either a horizontal or vertical formation. Play through all the levels and visit all the different areas of the map to achieve your goals. In order to earn all three stars, you must use the mouse or touch controls to swap at least three identical items into a match. If you match four or more, you may also gain access to additional bonuses and power-ups. Keep in mind that your moves in each round are counted, so try to complete the task with as few moves as possible for the highest score. I hope you end up being the lucky winner.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00501.warc.gz
CC-MAIN-2023-40
956
4
https://www.haptik.ai/tech/taking-neural-conversation-model-production/
code
Taking Neural Conversation Model To Production
Just about 3 years ago, multiple applications which were primarily backed by conventional machine learning modules caught the wave of optimism driven by the promising results of Deep Learning techniques. One such application was Machine Translation, which improved significantly with the introduction of the sequence-to-sequence learning method. Soon after, using deep learning for chatbots became a promising area of experimentation, with the neural conversation model becoming a starting point for developers and researchers. With an ample amount of conversational data in place and our eagerness to toy around with new technologies, Haptik, one of the largest conversational AI platforms, became a direct beneficiary of this research. However, it took us multiple iterations to come up with a system that worked for us in production and added significant value to the end user experience. We built a hybrid model which combined a neural conversational model with a graph-based dialogue system and achieved significant improvements over existing baseline systems. You can also refer to our research paper, ‘Production Ready Chatbots: Generate if not Retrieve’, which was presented at the AAAI-2018 Deepdial workshop. This post primarily focuses on why we came up with a hybrid approach and how chatbots at Haptik use this approach to respond to users' messages in a precise and intelligent way. Before we get into the hybrid system, let’s go through some basics.
What are the primary approaches to building Chatbots?
- Retrieval Approach: Retrieval based models use a repository of predefined responses and some rule-based expression match or an ensemble of Machine Learning classifiers to pick the most appropriate response. In this category, we use a graph-based dialogue system which helps respond to almost 70% of our users' messages. You can refer to the details of our retrieval approach in section 3 of the research paper.
- Generative Approach: Generative models are trained on human-to-human conversational history and build new responses from scratch. In our case, we use a seq2seq model with a 3-layer bidirectional GRU encoder and a unidirectional decoder (again with 3 layers and GRU cells) with attention over the encoder states. Details of this approach are available in section 4 of our research paper. You can read more about these approaches.
Why did Haptik need a Hybrid of Retrieval and Generative Approach?
While our retrieval model responded to the majority of user messages, it failed on complex user queries that contained a lot of spelling mistakes, deviation from the domain, code-mixed queries, etc. Hence, we decided to dive into experimenting with the seq2seq model to harness our historical conversational data. By training on around a million conversations, we quickly got a dirty prototype ready, which was a schizophrenic chatbot. This chatbot energetically answered all types of questions, supported every conversation, but at the same time displayed disorganized thinking which was not aligned with improving the end-user experience during the conversation. Unfortunately, we couldn’t use it in that shape, because it did not adhere to the following prerequisites of building a successful bot:
- Content, tone, and personality of the bot: There was no consistency in the language and grammatical constructions used by our seq2seq model; something which is not expected of a good conversational agent. 
- Accuracy: While it responded to queries, as expected, it also tried responding to unfamiliar intents with unexpected responses. This defeated the entire purpose of responding accurately.
- Alignment to a specific task: Good task-specific chatbots tend to keep conversations in a narrow domain and aim to drive them towards task completion. But our model accepted open-ended queries and engaged in endless chatting.
Based on the above issues, we concluded that we needed a model which could respond to complex user queries. Also, the objective of these responses should be dedicated to putting the user back on track, and hence navigate our graph-based dialogue system to exactly where the user left it. Hence, we started engineering a hybrid system which would use a generative model in a controlled fashion.
How does a Hybrid System work?
The graph-based system works for near-ideal scenarios and takes care of 70% of Haptik’s chatbot conversations. We introduced the neural conversation model to respond to the remaining 30% of conversations, where users tend to deviate from ideal chat-flows. We got human agents to respond in real time, with the intent of putting users back on an ideal chat-flow. Interestingly, 80% of our training data comprised these 30% of conversations, while the remaining 20% was taken from the other 70% of chatbot conversations. You can refer to section 4.2 of the paper for more details on our training data generation and section 5 to understand the real-time working.
Following is a snapshot of the real-time working of our hybrid system:
How did the Hybrid System help Haptik?
Haptik processes more than 5 million chatbot conversations on a monthly basis. Here’s a list of the advantages of a hybrid model, with snapshots from the reminders domain:
- We could respond to complex queries which were not handled by our graph dialogue system:
- The Hybrid System catered to Hinglish (code-mixed data generated by mixing Hindi and English) queries and also catered to outliers:
- It handled spelling errors, slang and other chat lingo used by Indian users:
One of the biggest advantages of being able to plug in a seq2seq model was that the performance of our chatbot system was directly proportional to data. Just like every other system, our model has its own limitations, which are mentioned along with the results and analysis of our system in section 6 of the research paper. Based on our experience of iterating over and again to achieve a result-oriented hybrid model, here is a list of key checkpoints we would like other developers to consider while building a system which includes usage of a generative model.
- Clearly defined goals– As a developer and a deep learning enthusiast, it’s always fun to build generative models because sometimes the model’s behaviour is amazingly cool and brings instant gratification. But needless to say, it is important to narrow down the goal for a generative system so that you can leverage your data efficiently. At Haptik, our primary aim while building a generative system was to keep a check on conversations that deviate from ideal flows.
- Choice of domain– The domain defines the vocabulary which a model will need to understand. And always: the smaller, the better. For example, if you want to build a model for travel ticket booking, you should ideally prefer to train a separate model for flight bookings, train bookings, hotel bookings and cab bookings.
- Amount of Data– A ground reality of such generative models is that they need lots of data points to learn. 
Training data generation is one of the most critical tasks while building any neural conversation model. Getting a considerable amount of data is difficult, but it is a mandatory step to get measurable results.
- Standardisation of data – Most chatbots are content-heavy, and messages contain a mix of static and dynamic data. Therefore, metadata throughout the system needs to be logged properly and critical entities should be tagged appropriately at the source. If not done right, one can easily end up in a situation where historical data is not reusable, or where it takes a disproportionate amount of time to clean it for feeding into a generative model.

We hope that this blog helps you when you decide to plug a generative model into your dialogue systems. We will keep writing about more use-cases of generative models as we deploy them on Haptik. Here's a good reading list to begin with for more research along the same lines:
- Long Short-Term Memory networks (LSTM)
- Sequence to Sequence Learning with Neural Networks
- A Neural Conversational Model
- A Persona-Based Neural Conversation Model
- Production Ready Chatbots: Generate if not Retrieve
- The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems (2015-06)

Think we did a good job? Let us know in the comments below. Also, Haptik is hiring. Visit our careers section or get in touch with us at [email protected].
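For readers who want to experiment with the generative half, here is the minimal sketch of the architecture described above: a 3-layer bidirectional GRU encoder and a 3-layer unidirectional GRU decoder with attention over the encoder states. This is an illustrative PyTorch reconstruction, not Haptik's production code; the vocabulary, embedding and hidden sizes are assumptions.

```python
# Illustrative sketch only: Haptik's actual implementation is not public, and
# the sizes below are assumptions, not the values used in production.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden=512, layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # 3-layer bidirectional GRU encoder
        self.gru = nn.GRU(emb_dim, hidden, num_layers=layers,
                          bidirectional=True, batch_first=True)

    def forward(self, src):                        # src: (batch, src_len)
        out, _ = self.gru(self.embed(src))         # out: (batch, src_len, 2*hidden)
        return out


class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden=512, layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # 3-layer unidirectional GRU decoder; each step consumes [embedding; context]
        self.gru = nn.GRU(emb_dim + 2 * hidden, hidden,
                          num_layers=layers, batch_first=True)
        self.attn = nn.Linear(hidden, 2 * hidden)  # bilinear attention score
        self.out = nn.Linear(hidden, vocab_size)

    def step(self, token, state, enc_out):
        # token: (batch, 1); state: (layers, batch, hidden)
        emb = self.embed(token)                               # (batch, 1, emb_dim)
        query = self.attn(state[-1]).unsqueeze(2)             # (batch, 2*hidden, 1)
        weights = torch.softmax(torch.bmm(enc_out, query), dim=1)
        context = (weights * enc_out).sum(dim=1, keepdim=True)
        out, state = self.gru(torch.cat([emb, context], dim=2), state)
        return self.out(out.squeeze(1)), state                # logits: (batch, vocab)


# Smoke test with a toy batch
enc, dec = Encoder(vocab_size=8000), AttnDecoder(vocab_size=8000)
src = torch.randint(0, 8000, (2, 12))
state = torch.zeros(3, 2, 512)
logits, state = dec.step(torch.full((2, 1), 1, dtype=torch.long), state, enc(src))
```

In production, a decoder like this would be run step by step with greedy or beam-search decoding, and invoked only when the retrieval layer fails to find a confident match, which is the controlled fashion described in this post.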
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474659.73/warc/CC-MAIN-20240226094435-20240226124435-00403.warc.gz
CC-MAIN-2024-10
8,431
42
https://blog.amit-agarwal.com/2020/07/26/pygmentize-styles/
code
I have recently started using pygmentize for looking at my code in the terminal. It is a very good and native way to do this. If you do not know about pygmentize, its help text describes it like this: highlight the input file and write the result to an output file; if no input file is given, use stdin, and if -o is not given, use stdout. So you can simply pass a script or source code through pygmentize and get lovely colour output with code highlighting in the terminal, which can be very useful. And you can use different styles; the command shown below lists all the available ones. Hope this helps.
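A likely reconstruction of those commands (the flags are part of pygmentize's documented CLI; the style and file names are just examples):

```bash
# List all available styles
pygmentize -L styles

# Highlight a source file in the terminal, guessing the lexer from the
# file (-g) and applying a specific style (the style name is an example)
pygmentize -O style=monokai -g myscript.py
```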
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100531.77/warc/CC-MAIN-20231204151108-20231204181108-00223.warc.gz
CC-MAIN-2023-50
860
11
https://mail.python.org/pipermail/python-list/2014-February/666655.html
code
singleton ... again
breamoreboy at yahoo.co.uk
Wed Feb 12 19:57:09 CET 2014
On 12/02/2014 17:50, Asaf Las wrote:
> On Wednesday, February 12, 2014 7:48:51 AM UTC+2, Dave Angel wrote:
>> Perhaps if you would state your actual goal, we could judge
>> whether this code is an effective way to accomplish
> There is no specific goal, i am in process of building pattern knowledge
> in python by doing some examples.
For more data on python patterns search for python+patterns+Alex+Martelli. He's forgotten more on the subject than many people on this list will ever know :)
My fellow Pythonistas, ask not what our language can do for you, ask what you can do for our language.
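For readers building that pattern knowledge, here is a minimal sketch of two common Python idioms that come up in such threads: the classic singleton and Alex Martelli's "Borg" alternative. This is illustrative only, not code from the original discussion.

```python
# Illustrative only; not code from the original thread.

class Singleton:
    """Classic singleton: __new__ hands back the same instance every time."""
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


class Borg:
    """Alex Martelli's 'Borg': instances differ, but all share one state."""
    _shared_state = {}

    def __init__(self):
        self.__dict__ = self._shared_state


a, b = Singleton(), Singleton()
assert a is b                    # one shared instance

x, y = Borg(), Borg()
x.name = "spam"
assert y.name == "spam"          # distinct objects, shared state
assert x is not y
```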
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314752.21/warc/CC-MAIN-20190819134354-20190819160354-00094.warc.gz
CC-MAIN-2019-35
802
16
https://stac.astrogeology.usgs.gov/docs/tutorials/cli/
code
Discovering and Downloading Data via the Command Line
This is an in-progress draft example. Please feel free to test, but use with caution!
This tutorial focuses on searching for and downloading Analysis Ready Data (ARD) from a dynamic Spatio-Temporal Asset Catalog (STAC) using the command line. At the end of this tutorial, you will have installed the stac-client command line tool, searched for Lunar data, and downloaded data locally for use in whatever analysis environment you prefer to use. Let's get right to it!
In this tutorial, you will learn how to:
- Create GeoJSON Regions of Interest (ROIs)
- Use the STAC API to list available collections
- Use the STAC API to search collections for data in the ROI
- Download data from the cloud inside of a predefined ROI

This tutorial requires that you have the following tool installed on your computer: stac-client, a Python library and command line tool for discovering and downloading satellite data. In this tutorial, only the command line tool will be used. First, we need to get the tool installed. See the installation instructions to get the tool installed. Note the py at the start of the module name: the client is written in Python, so the module is called pystac-client, while the command line tool is called stac-client. To confirm that stac-client installed properly, run its help command (a reconstructed example is shown below) and check that usage text is printed. Congratulations! You have successfully installed stac-client. It is now time to search for analysis ready planetary data.

Before we start searching, let's take a moment to talk about GeoJSON. GeoJSON is a standard that is used to encode spatial geometries. All of the STAC items that are available for download include an image footprint or geometry that describes the spatial extent of the data. A common way to discover data is to ask a question like 'what image(s) intersect with my area of interest (AOI)?'. In order to answer that question, we need to ask it using a polygon encoded as GeoJSON. Since we are working with a command line, we need to do a bit of leg work and encode a GeoJSON polygon. First, let's make a simple square. To do this, open a text editor (vim, emacs, nano, notepad++, etc.) and paste a polygon like the reconstructed example shown below. This area of interest spans from the prime meridian to 0.5˚ east of the prime meridian (0˚ to 0.5˚) and from the equator to 0.5˚ north of the equator (0˚ to 0.5˚). The geometry includes five points because we need to 'close' the ring. In other words, the first and last point are identical. Let's save that GeoJSON into a text file named aoi.geojson (short for area of interest). If you are having any issues with the above, definitely run the string through a GeoJSON linter (or checker).

We have officially made it! We have the tools all set up to search for data. Let's get that first search out of the way immediately. Execute the following:
stac-client search https://stac.astrogeology.usgs.gov/api --matched --method GET
You should see output reporting the number of matched items (a reconstructed example appears below). The number of items found will differ as we add more data, but the general response should be identical. This means that the dynamic planetary analysis ready data catalog contained 57114 STAC items when this tutorial was being written.
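The inline snippets referenced above (the install check, the AOI polygon, and the first search) did not survive extraction. Here is a hedged reconstruction: the help flag is part of the documented pystac-client CLI, the coordinates follow the 0.5˚ × 0.5˚ square described in the text, and the matched count echoes the figure quoted above.

```bash
# Confirm the client is installed (prints usage text; exact output varies by version)
stac-client --help

# Save the area of interest described above as aoi.geojson
cat > aoi.geojson << 'EOF'
{
  "type": "Polygon",
  "coordinates": [[
    [0.0, 0.0],
    [0.5, 0.0],
    [0.5, 0.5],
    [0.0, 0.5],
    [0.0, 0.0]
  ]]
}
EOF

# First search: report how many items the catalog matched
stac-client search https://stac.astrogeology.usgs.gov/api --matched --method GET
# -> 57114 items matched
```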
Let's break down the query to understand exactly what is happening here. First, here is the query that we executed:
stac-client search https://stac.astrogeology.usgs.gov/api --matched --method GET
The first thing we do is tell stac-client that we want to search for data. The other option would be stac-client collections (we will use that shortly). Next, stac-client needs to know the URL to use to be able to access the STAC search service. The USGS hosted STAC server URL is https://stac.astrogeology.usgs.gov/api/. The remaining arguments tell stac-client to print only the number of matched items rather than the items themselves (--matched) and to issue the search as an HTTP GET request (--method GET).

We use the jq tool in this section for pretty printing the GeoJSON responses from the API. jq can be installed just like stac-client was installed, using conda install jq. Now we would like to see what collections are available to search and download data from. To do this, we can use the following command:
stac-client collections https://stac.astrogeology.usgs.gov/api
or, if you have installed jq for pretty printing:
stac-client collections https://stac.astrogeology.usgs.gov/api | jq
The output, the full JSON metadata for each collection, is too long to reproduce here. At the time of writing, the above command will return six different collections with data targeting the Moon, Mars, and Jupiter's moon Europa. Each of these collections can be queried independently. Let's see how many data products are available from a specific collection, say, the MRO HiRISE uncontrolled observations.

The full dump of collection metadata is a lot to parse and likely not information needed all at once. It would be easier to just get the human-readable title and the machine-parseable collection id. To do this:
stac-client collections https://stac.astrogeology.usgs.gov/api | jq '.[] | "\(.title) \(.id)"'
The output is a list of title/id pairs, one per collection.
This tutorial is using the jq command line JSON tool pretty heavily. While powerful, the jq syntax can be very intimidating! Feel empowered to just copy/paste for now and let us have spent the time getting the syntax right. Once you are more comfortable with the basics of querying the API, you could dig more into jq. Alternatively, just print the JSON to the screen or pipe it to a text file and manually scan for the fields of interest.

To see how many items (observations) are available within a given collection, it is necessary to tell stac-client which collection to search. We know the names of the collections because they are the id key in the STAC collection. In the example immediately above, the line is "id": "mro_hirise_uncontrolled_observations". Since we are interested in MRO HiRISE data, we will use the following command:
stac-client search https://stac.astrogeology.usgs.gov/api/ -c mro_hirise_uncontrolled_observations --matched
The response reports the number of matched items within that collection.

Above, we created a file named aoi.geojson that defines an area of interest. Now we will combine that with a query for the target body we are interested in. Here is the full command:
stac-client search https://stac.astrogeology.usgs.gov/api/ --intersects aoi.geojson -c mro_hirise_uncontrolled_observations --save hirise_to_download.json
Let's break this command down like we did above:
- https://stac.astrogeology.usgs.gov/api/ defines the URL to search
- --intersects aoi.geojson tells stac-client to only search for data that intersects our area of interest (as defined in aoi.geojson)
- --save hirise_to_download.json tells stac-client to save the results to a file named hirise_to_download.json. We will use this file in the next step to download the files found.

This command creates a new file on disk, hirise_to_download.json, that contains a GeoJSON FeatureCollection with some number of observations in it.
We can see what the number is by parsing the file, or by running the above command with --save hirise_to_download.json replaced by --matched. (At the time of writing, this command returned 4 items.) Since the hirise_to_download.json file is a GeoJSON FeatureCollection, it is possible to load that file into your favorite GIS, to visualize the image footprints, and to see the attributes of the different items. You will not see the data behind the metadata, but we will download the data in the next step.

Let's imagine that the four items found above are the ones that we are looking for. In the previous step you executed a query and created a new file named hirise_to_download.json that contains four STAC items. To download the data locally, here is a small helper script that makes use of jq and wget (a reconstructed sketch of it appears at the end of this tutorial). You could save this script into a file named download_stac.sh in the directory you are currently in. Then you can download the files that were found by the search by running the script against the saved results. This command will run for a few minutes (on a relatively fast internet connection). At the conclusion of the run, you should have a new directory called hirise_uncontrolled_monoscopic. Inside of that directory, you should see four sub-directories, each containing all of the data for the STAC items we discovered previously! The data are organized temporally. The STAC specification is spatio-temporal after all.

That's it! In this tutorial, we have installed the stac-client tool into a conda environment, executed a simple spatial query, and discovered and downloaded STAC data from the USGS hosted analysis ready data (ARD) STAC catalog.
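The helper script itself did not survive extraction. Below is a hedged reconstruction that does what the text describes with jq and wget; note that the directory layout here is an assumption (the original grouped output under hirise_uncontrolled_monoscopic by date, while this sketch groups by collection and item id).

```bash
#!/usr/bin/env bash
# download_stac.sh -- hedged reconstruction of the helper script described above.
# Assumes jq and wget are installed and the input is a saved ItemCollection
# (a GeoJSON FeatureCollection, as produced by stac-client's --save flag).
# Usage: bash download_stac.sh hirise_to_download.json
set -euo pipefail

ITEMS_FILE="${1:?usage: $0 <saved-search-results.json>}"

# Pull every asset URL out of every item, then fetch each file into a
# directory named after the item's collection and id.
jq -r '.features[] | .collection as $c | .id as $i |
       .assets[] | "\($c)/\($i) \(.href)"' "$ITEMS_FILE" |
while read -r dir url; do
    mkdir -p "$dir"
    wget --no-clobber --directory-prefix="$dir" "$url"
done
```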
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506479.32/warc/CC-MAIN-20230923030601-20230923060601-00724.warc.gz
CC-MAIN-2023-40
8,616
62
https://www.travelblog.org/Africa/Tunisia/Monastir/blog-459114.html
code
Published: December 8th 2009
In the early days of any teaching contract you rarely have a full timetable of classes. That's great for us as it gives us the chance to get a few day trips in before work gets too busy and/or stressful!! Sousse, where we are living, is a touristy town but it is not alone on this stretch of Tunisia's coastline. Nearby, the resort of Monastir is probably a little better known amongst European holidaymakers. Getting there is easy for us. We just take a bus into the centre of Sousse and then another bus down to Monastir. In all it's about 90 minutes' travel so not so bad. The problem was that we had just missed the bus from Sousse so we had to chase it down the street until it stopped for us!!!
Monastir is famous for a couple of things. The most legendary, certainly amongst fans of Monty Python, is its central ribat where The Life of Brian was filmed. In case you didn't know, that's where the line entitling this blog comes from. Aside from the movie connection, the ribat was a very interesting fortification to explore. The views of the city and the coast are wonderful, especially from the top of the so-called Control Tower. From the ribat you also get a marvellous view of Monastir's other notable sight, the mausoleum of Habib Bourguiba, who was the first president of independent Tunisia. It is a striking Islamic construction with two huge minarets and a giant golden dome in the middle. It's just a shame that the gold leaf is being re-touched at the moment but hopefully that doesn't detract from the beauty of the place. You can judge for yourself with the panoramic image. Unusually, it is free to go inside and it is a tranquil, cool place to spend a few minutes of contemplation. There are some small displays about the life of the former president too, making it a little like a museum.
Back outside in the sun we sat and had some traditional street food. Chapatti is something we would normally associate with an Indian restaurant. Here though, it's a type of bread which is sliced to make a kind of sandwich. We had ours stuffed with spicy sausages, a bit of salad and some of the local sauces. Harissa is a very spicy red sauce made from chilli peppers, more Russ' thing than Trish's! The green salade mechouia is a little less spicy but laced with garlic - delicious!! To get back to Sousse we took the Metro train which was quicker and easier than the bus. Had we known about it earlier in the day, we probably wouldn't have run through the streets chasing after a bus!
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213006.47/warc/CC-MAIN-20200924002749-20200924032749-00099.warc.gz
CC-MAIN-2020-40
2,676
19
https://www.cocoacontrols.com/search?q=safe
code
June 13, 2020 • MIT License
A percentage type for Swift.
June 03, 2020 • MIT License
Lightweight generic library to build table and collection views in a declarative type-safe style
May 27, 2019 • MIT License
🏷 Type-safe tags in Swift
July 10, 2018 • MIT License
Protect your users against malware and phishing threats using Google Safe Browsing
September 27, 2015 • MIT License
A simple, type safe, failure driven mapping library for serializing JSON to models in Swift 2.0. iOS and OSX Foundation Independent (that means it will work without Cocoa when Swift is Open S...
August 27, 2015 • MIT License
A framework containing a safe wrapper for UnsafeMutablePointer with integer to byte array generics.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891640.22/warc/CC-MAIN-20200707013816-20200707043816-00260.warc.gz
CC-MAIN-2020-29
846
16
https://amsnbc.com/politico-reporter-on-obtaining-leaked-scotus-draft-opinion-overturning-roe-v-wade/
code
Josh Gerstein, senior legal affairs reporter for Politico, talks with Rachel Maddow about his reporting that he has obtained a draft majority opinion from the Supreme Court that shows the court has voted to overturn abortion rights in the United States.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103035636.10/warc/CC-MAIN-20220625125944-20220625155944-00104.warc.gz
CC-MAIN-2022-27
981
10
http://blogs.technet.com/b/virtualization/archive/2009/04/20/live-migration-and-host-clustering-available-at-no-charge-in-microsoft-hyper_2d00_v-server-2008-r2.aspx?PageIndex=2
code
Information and announcements from Program Managers, Product Managers, Developers and Testers in the Microsoft Virtualization team.
I'm Zane Adam, senior director of virtualization and System Center. It's been a while since my last post, and I wanted to update you on our standalone hypervisor, Microsoft Hyper-V Server 2008 R2.
Last Fall we released Microsoft Hyper-V Server 2008, a standalone hypervisor-based virtualization product that is available for free. We continue to add more features and value to this product in the upcoming release, Microsoft Hyper-V Server 2008 R2. Our core strategy is to ensure that our customers can virtualize their IT environment in the most cost-effective manner and, at the same time, have access to enterprise features like live migration and clustering for high availability. So in addition to scalability and performance improvements in this version, customers can get live migration and host clustering capabilities and high availability (up to 16 nodes) at no charge. Microsoft Hyper-V Server 2008 R2 will continue to be free, and now will include live migration and host clustering capabilities. Customers won't need to pay thousands of dollars for alternate virtualization platforms to get these features. With Microsoft Hyper-V Server 2008 R2, customers have a solution for both planned and unplanned downtime and can use it for scenarios like server consolidation, branch server consolidation, high availability, and VDI. These same features also will be available in Windows Server 2008 R2 Hyper-V, which provides our customers with core virtualization features as part of the Windows Server offering. With Windows Server 2008 R2, customers also get flexible virtualization rights (e.g., 4 free virtual instances with Windows Server Enterprise Edition and unlimited virtual instances with Windows Server Datacenter Edition). You can download the Microsoft Hyper-V Server 2008 R2 beta here. Microsoft Hyper-V Server 2008 R2 can be managed by System Center Virtual Machine Manager 2008 R2. You can download the beta here. You can find more info on Microsoft's virtualization products and solutions at http://www.microsoft.com/virtualization.
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398468233.50/warc/CC-MAIN-20151124205428-00271-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
3,530
18
http://rosecompiler.org/?p=64?shared=email&msg=fail
code
A draft tutorial for a ROSE-based end-to-end empirical tuning system has been made available online. ROSE is a central component in the SciDAC PERI project to enable performance portability of DOE applications through an empirical optimization system, which incorporates a set of external tools interacting with ROSE to support the entire life-cycle of automated empirical optimization of large-scale applications. Please go to Internal ROSE Projects to learn more about the project and download the draft tutorial.
Autotuning draft tutorial has been released
Posted in News
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863259.12/warc/CC-MAIN-20180619232009-20180620012009-00621.warc.gz
CC-MAIN-2018-26
572
3
https://poststatus.com/ah-the-pre-tag-its-quite/
code
Ah, the <pre> tag. It's quite handy to show off code. Chris Coyier has some good tips on how to style that tag. I had no idea there was a tab-size property in CSS for styling pre tags. Chris says the default is eight spaces, so it should probably be standard for us to reduce that, especially for mobile. And speaking of tags, if you want some alternatives to inserting line breaks without <br>, CSS-Tricks has some suggestions for that too.
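For reference, a minimal sketch of the kinds of rules described (the values and selectors are illustrative, not Chris's exact code):

```css
/* Shrink the default 8-space tab rendering inside <pre> blocks */
pre {
  tab-size: 4;
  -moz-tab-size: 4; /* older Firefox needs the prefix */
}

/* One alternative to <br>: let newlines in the markup render as line breaks */
p.poem {
  white-space: pre-line;
}
```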
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817043.36/warc/CC-MAIN-20240416031446-20240416061446-00491.warc.gz
CC-MAIN-2024-18
436
5
https://ess.science.energy.gov/abstract/synthesis-of-hydrological-science-in-the-east-river-from-evapotranspiration-estimates-to-long-term-climate-dynamics/
code
Synthesis of Hydrological Science in the East River from Evapotranspiration Estimates to Long-Term Climate Dynamics
Yuxin Wu1* ([email protected]), Bhavna Arora1, Max Berkelhammer2, Rosemary Carroll3, Jiancong Chen4, Chunwei Chou1, Baptiste Dafflon1, Brian Enquist5, Boris Faybishenko1, Cynthia Gerlein-Safdi1,4, Lara Kueppers4, Michelle Newcomer1, Thomas Powell1, Matthias Sprenger1, Tetsu Tokunaga1, Kenneth Williams1, Erica Woodburn1, Eoin Brodie1
1Lawrence Berkeley National Laboratory, Berkeley, CA; 2University of Illinois, Chicago, IL; 3Desert Research Institute, Reno, NV; 4University of California–Berkeley, Berkeley, CA; 5University of Arizona, Tucson, AZ
A watershed's hydrological behaviors are fundamental to its functions. Quantifying the water fluxes moving across the interfaces between compartments of the hydrological cycle is key to understanding and predicting its functions. Evapotranspiration (ET) moves a large quantity of water across the land–atmosphere interface and often contributes most to uncertainties in hydrological behavior. Accurate quantification of ET is a fundamental challenge in predicting watershed hydrological function. Two aspects of this ET research are highlighted: ET uncertainty quantification across different methods, and a historic trend of ET behaviors. Two major factors contribute to ET uncertainties and difficulties among different methods: unknown true ET values for benchmarking, and differences in meteorological inputs, ET formulations, and parameterization. Understanding sources of uncertainty and developing gold-standard benchmarking platforms and datasets are key to accurately predicting ET at watershed scales and beyond. For ET synthesis, a concerted effort was conducted to synthesize ET-related research across the Watershed Function SFA team. Some key progress from this effort is highlighted, including the development of ET benchmarking platforms and datasets using a controlled lysimeter setup and its use to improve ET model parameterization, as well as the comparison of various ET methods at selected benchmark locations and time periods to identify sources of uncertainty. Results from this effort suggest (1) up to >50% variation in ET quantity across the different methods, particularly in the summer growth season; (2) non-linear and scattered correlations of time-stamped ET across different approaches, suggesting fundamental deviations among these approaches; and (3) meteorological forcing (e.g., radiation and wind) is a significant contributor to variations across the approaches. A few key improvements needed for ET quantification at the East River watershed are also identified. For the historic ET analysis, a statistical time series analysis (1966 to 2021) was applied to 17 locations at the East River watershed. ET was calculated using the Budyko model, the Thornthwaite and Hargreaves equations, and the Penman-Monteith equation. The results were used to calculate the standardized precipitation index and the standardized precipitation-evapotranspiration index. The analysis suggests a shift of more locations toward water-limited scenarios, with these water-energy limitation zonation patterns driven by dynamic climatic processes. This provides a historic context for observations of changes to ecosystem properties and function.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473401.5/warc/CC-MAIN-20240221070402-20240221100402-00146.warc.gz
CC-MAIN-2024-10
3,305
6
https://www.fi.freelancer.com/work/csv-reader-using-php/
code
i need to have a ...general traffic not targeted ADS. There are millions of placements around the world; i need the maximum you can get. If you can do it, you should make the file with the list in a .csv file that can be uploaded to my Google Adwords account. Please don't bid if you don't understand what i need, or are offering me something else. Thanks

-This program needs to be made downloadable on my Windows 8 Windows Surface Pro 4 - Uses the webcam to scan/read both Barcodes and QR codes and import them into a pre-existing file in Microsoft Excel - Saves each session by time and date stamps in a text file as a back up

Build a platform which integrates the UPS and Route4Me APIs. Read addresses from a CSV file; when the unique number is scanned, print a shipping label. Based on the configured zip code, print a label for hand delivery/UPS.

...story so that our brand and core values do not get diluted along the way *Must follow the traditional business plan format: A. Executive Summary- Briefly tell your reader what the company is and why it will be successful B. Company description- Provide detailed information about your company. Go into detail about the problems your

Help with implementing Urban Airship on Android using Google Analytics. I need to have documentation to help the Android developers know how to implement everything. You have to have used Urban Airship and you must have hooked up Google Analytics. One of our use cases is "If last sleep tracking was 10 days ago:" send a push. The problem is we need

i have two shopify csv files but for some reason they are not uploading in full; only 5 products get imported if I try. one has 13 products and one has 50 products. I need an expert who can make it upload all the 63 products. Someone who can act now. Looking to speak with the right bidder who can help me act immediately.

Hi, I am looking for a freelancer with great knowledge of the Google API. Details: need to display the localization of users on a dashboard on our website. We have an existing iOS app tied to it which already tracks the users. But we are not able to make the data display on the web side. The API is built but needs to be made functional and display proper stats as well. Looking for a long t...

We are in need of a Quick and Great Proofreader. We need you to feel comfortable with making edits and suggestions for the projects from time to time: #1 Reports #2 Emails #3 General writing. Please let us know about the previous experience you have had

We are looking for content writers for ou...looking for content writers for our website for caregivers of senior parents/family members. Need a conversational writing style- "think of writing to a friend". The target reader is primarily female (age 40-65). Will start by hiring the writer for 2 articles but we are looking for someone we can hire long-term.

I am looking for an Indian who can do some critique on a piece of content written. The content word count is 652 words. I need an Indian as the content is about Indian Lifestyle/Wedding. I would like you to correct as well as comment on the flair, grammar and anything else the content lacks, and also indicate the positives of the content. If you have some previous work experience on Celebrity/Life...

Simple opensource apk in python and kivy. The apk must be able to read: 1- an NFC tag approached to an Android device. 2- a string from a Bluetooth device associated with an Android device.
The string can come from a barcode reader or other Bluetooth device.

I have a rough MySQL DB schema designed and need help with writing SQL statements to: 1. Create tables 2. Upload CSV data (thousands of records) 3. Dedupe the uploaded data 4. Merge records based on rules 5. Provide the SQL statements and help implementing them for uploading more data at a later time. I have a MySQL server running with Workbench on a

Looking for someone that could work on a CSV file that contains many lines of product details that require editing. We need someone with experience completing various fields on Microsoft Excel. Instructions will be provided. Depending on the turnaround, how fast you will complete a certain number of fields and if all correct, this is a one time

...be attached to each item (bedsheet, towel etc) and every time the driver picks up the clothes that need to be washed from a client, he will scan the items with a portable UHF RFID reader and print a ticket on a portable printer with the items that he picked up. When the items are returned cleaned to the client, the driver scans the items again and prints

We are currently looking for content writers for busi...drafted should be crystal clear on the product offering descriptions, should build confidence, should build a sense of trust, and the content drafting style should keep the reader engaged till the end. If me and my team like the content that you have written, then there will be more to follow.

I need a Word (doc,docx), PowerPoint (ppt,pptx), Excel (xls,xlsx), PDF, Html and text reader. Files open within the application.

Good day, I have the ZCS160 multi-functional reader (google it) and need software to work with it. The device can read magnetic cards, read and write RFIDs, and read and write smart cards / chip cards / EMV cards. The program should display the information from the cards in simple text so that i can rewrite them to my needs and then write it back

...am looking for at least 60 [log in to view the URL] it's a long term project. Articles should be over 2000 words and should use simple English, but the writing style should touch the hearts of the reader. So I guess the ideal writer should be someone who has travelled a lot in Sri Lanka. I will provide the topics for the articles, then you have to write an article for me. Please
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590866.65/warc/CC-MAIN-20180719105750-20180719125750-00154.warc.gz
CC-MAIN-2018-30
5,828
18
https://unix.stackexchange.com/questions/302965/how-to-install-gcc-4-9-arm-cross-compiler-on-debian-stretch
code
Debian "stretch" has built-in cross compilers for gcc versions 5 and 6, but apparently only includes the native architecture in its gcc-4.x packages. Unfortunately I need to compile software that depends on older Linux kernel headers that fail to build with gcc versions later than 4.x (they end up trying to include a file include/compiler-gcc<major-version>.h that doesn't exist for later version numbers), so these are no use to me. I tried using the "embedian" repository, but it only had version branches for "wheezy", "jessie" and "unstable", so assuming "unstable" was an out-of-date reference to stretch I tried that, but I don't seem to be able to persuade it to install anything useful. The error I get is: The following packages have unmet dependencies: gcc-4.9-arm-linux-gnueabihf : Depends: cpp-4.9-arm-linux-gnueabihf (= 4.9.2-16) but it is not going to be installed Depends: libgcc-4.9-dev:armhf (= 4.9.2-16) Depends: libisl13 (>= 0.10) but it is not installable Recommends: libc6-dev:armhf (>= 2.13-5) E: Unable to correct problems, you have held broken packages. I believe the first two failed packages are available to be installed, so I could do those manually if necessary, but the third does not seem to exist anywhere I look. Any suggestions how I can get a working gcc-4.9 (or earlier) cross compiler for arm-linux-gnueabihf on this system?
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100626.1/warc/CC-MAIN-20231206230347-20231207020347-00494.warc.gz
CC-MAIN-2023-50
1,363
6
http://ricardohjwiv.look4blog.com/8981406/the-smart-trick-of-homework-writing-service-that-no-one-is-discussing
code
Capstone papers are sophisticated and prolonged assignments. In any case, those online services which charge more often simply lack standards. Students who don't have much time to spend on their homework usually take advantage of our free homework help services. Accessing our free homework service can help learners accomplish their homework free of cost. Our homework tutors are online 24/7 to provide help pertinent to any topic. Why do you need the help of this sort of service? The answer to this question is obvious once you understand what a capstone project is. This is the project that students create at the conclusion of the academic semester, and there is no need to argue that writing a capstone project paper is a very responsible assignment, as your final mark depends on this work. So you cannot make a mistake when writing it. I was impressed with the results: complete, solid analysis and excellent writing (on a highly technical project at that). It was excellent on the first pass – no grammatical problems and no edits required. Top-notch! Today, companies are employing a great number of writers, even unqualified ones, just to take advantage of desperate students. The good news is that you can quickly identify them, because they generally charge minimal fees even for challenging projects. Exercising this level of caution ensures that you don't get poorly completed work! How can I do my homework in biology if I don't have any time for it? If this is the question you often ask yourself, our service will be of great help. There's no need to stress about urgent deadlines. We have plenty of specialists in biology ready to start working on your paper right away. If you purchase these papers from us, you can rest assured that each order will be given enough attention and will follow your instructions. With that said, here are some of our extraordinary features! Unless yours stands out, your email will be ruthlessly deleted, never to be seen again. You can waste months combing through sample resumes trying to fathom how to grab recruiters' attention, but there's a better way to get a callback, finally reach the interview stage, and land a job. You merely need a bit of professional resume assistance, and our writers will gladly assist you. Make sure you mention the ways the solutions were implemented and the results received, and review the methods of implementation objectively. If you are relatively free in picking a topic, dedicate sufficient resources to searching, as there are plenty of such projects around. Our experienced staff of 'write my homework' specialists have made us one of the most trustworthy services today. The fact that former clients still seek out our 'do my homework' services proves that we're the most suitable custom writing company operating today. We value your good results and do not slack off, so you can rest assured that your assignment will be effectively structured, edited and proofread. Unlike CPM homework services, which may not always provide you with free samples, we've made sure that you can obtain them prior to buying homework. So, when looking for a great writing service, you must consider various factors.
Great examples here include finding reliable custom writers and researchers. The main reason behind this is that you'll really need to look at almost everything within your college or university education.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.84/warc/CC-MAIN-20181217135323-20181217161323-00407.warc.gz
CC-MAIN-2018-51
3,823
14
https://www.wiznicworld.com/blogs/relationship-between-project-planning-and-scheduling-primavera-construction-scheduling-training/
code
Planning and scheduling go hand in hand. In this section we will analytically compare the two major paradigms of interest: planning and scheduling. The main purpose of this comparative study is first to establish the clear-cut distinction between the planning and scheduling paradigms. After that we will analyse how these two paradigms are complementary to each other. In this part we are mainly interested in looking at the theoretical aspects of these two domains.

A plan is the theory or the details of how something like a project will get done. A plan is used to create a road map for the achievement of an objective. A schedule, when linked to a plan, is the assignment of times and dates to specific steps of the plan. Planning is the process of identifying all activities necessary to complete the project, while scheduling is the process of determining the sequential order of activities, assigning planned durations and determining the start and finish dates of each activity. Planning is a prerequisite to scheduling because there is no way to determine the sequence of activities until they are defined; however, the two become synonymous in practice because they are performed interactively.

A plan requires an outcome and is the road map for getting from the current moment to the future goal. Plans can be short term or long term. They always involve details and specific steps that guide the plan from start to finish. Plans should always make room for unforeseen events; for example, if one part of the plan is contingent on an external factor, the plan should account for such an event.

The figure above depicts the planning task in its purest form, in which planning takes input from the external world based on some requirements or criteria and generates the sequence of actions so that the need from the external world is satisfied. In the figure, the demands can either come from the marketing department, describing the need of the market, or directly from the customer, demanding a feasible plan that will fulfil the requirements of a particular scenario. The main objective is to transform these demands into a rough plan to meet them. Then, during the actual planning task, the flow or sequence of actions is specified which, if carried out, satisfies the demand at hand. Speaking more formally, it describes the transition from the initial state of the world to the goal state where the demands are fulfilled.

For a long time most researchers treated problems from the planning and scheduling domains as solely planning problems, and the scheduling task was mainly considered a sub-domain of planning. Real-life problems, however, need to achieve an optimal solution over some criteria, such as minimization of cost. Although pure planning techniques were quite successful in dealing with long-term time horizons, they have limited applicability for the short term. Even today there is a conception of the scheduling task as a special case of planning in which the actions are already chosen, leaving only the question of allocating these orders for assignment. This is an unfortunate trivialization of the scheduling task. As opposed to the planning task, scheduling has found a well-defined boundary for its definition. The scheduling task can be defined from various viewpoints, such as operations research and artificial intelligence.
So before talking more about scheduling, let's consider a few definitions that are widely accepted to describe the nature of the scheduling task.
"Scheduling is the problem of assigning limited resources to tasks over time in order to optimize one or more objectives"
"Scheduling deals with the exact allocation of jobs over time, i.e., finding a resource that will process the job and the time of processing"
"Scheduling deals with the temporal assignment of jobs to the limited resources where a set of constraints has to be regarded"
"Scheduling selects among the alternative plans and assigns resources and times for each job so that the assignment obeys the temporal restrictions of jobs and the capacity limitations of a set of shared resources"

Scheduling is the process of determining times and dates to achieve specific objectives. Schedules, like plans, can be long term or short term. Often, short-term schedules are very important and linked to long-term schedules. When working with an organizational or institutional schedule, it is important that the different players working on a project coordinate their schedules and have access to one another's schedules to ensure a smooth work flow.

As we have seen from the figure above, planning is often influenced by external environmental factors and produces a partial order of tasks. This partial order of tasks serves as an input for the scheduling task. It can be seen from the same figure that the planning task is mainly concerned with the question "what should I do?", whereas scheduling mainly deals with the question "how should I do it?" The following figure depicts the scheduling task in its purest form.

Despite the fact that the planning and scheduling tasks have their separate existence as research philosophies, they can be interlinked with each other, forming a cohesive working environment. Such an environment is usually referred to as an integrated planning and scheduling environment. The following diagram depicts the integrated planning and scheduling environment.

Project Management Office
Scheduling is an essential part of project management and needs a lot of monitoring and cross-referencing. The whole project depends on the schedule, so reviewing it makes it fool-proof and ensures a streamlined flow of activities leading to the completion of the project.

| Planning | Scheduling |
| --- | --- |
| The planning task mainly deals with WHAT actions need to be carried out in order to achieve the final goal-state of the project. | The scheduling task mainly deals with finding out WHEN/HOW to carry out the actions to optimize the criteria of the project. |
| It mainly concerns reasoning about the consequences of acting, in order to choose among the set of possible courses of action. E.g. a plan must consider the possible set of actions available, look at their consequences, and choose one action that satisfies most of the requirements. | It mainly concerns mapping the various sets of tasks to the available resources for specific time intervals while satisfying the constraints. E.g. assign task A to machine A and task B to machine B; the duration of task A is x min. and that of task B is y min., etc. |
| Input to the planning task: 1. A set of possible courses of action. 2. A predictive model of the underlying dynamics. 3. A performance measure for evaluating courses of action. Output of the planning task: one or more courses of action that satisfy the specified set of performance requirements. E.g. a travelling salesman problem: (a) the set of possible courses of action is a set of travel options: air flights, car, railway, etc.; (b) dynamics: the travel dynamics, i.e., information regarding travel time and cost, the way they affect each other, and how external world conditions (for example, weather) affect such actions; (c) a set of requirements specified by the external world, e.g. be in city A on Monday and Tuesday and in city B from Wednesday afternoon till Friday night, with constraints on the solution such as start no earlier than Sunday, arrive before Saturday night, and maximum expenditure not exceeding £1000. | Input to the scheduling task: 1. A set of tasks to be assigned. 2. A set of available resources for the execution of the tasks. 3. The capacity of the available resources. 4. The time intervals of the specified tasks. 5. The constraints imposed on the tasks. Output of the scheduling task: a schedule that assigns the given set of tasks to the available resources, maintaining their time intervals without violating the constraints. E.g. in job-shop scheduling the inputs would be a set of jobs such as drilling, milling, etc., and the available resources would be the machines on which these jobs can be executed. The constraints can take the form: no resource is assigned more than one job at a time; each job is completed before the start of the next job (precedence relations among jobs); etc. |
| Planning can be stated as satisfying (finding some solutions that satisfy the constraints), i.e., finding the feasible set of solutions transferring the initial state of the world into the goal state. | The scheduling task is normally seen as an optimisation task over one or more objective functions, such as minimisation of cost or maximisation of resource utilisation. |
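To make the contrast concrete for readers who like code, here is a tiny illustrative sketch (not part of the original training material; the tasks, durations and crews are invented):

```python
# Planning answers WHAT to do: choose a course of actions that reaches the goal.
plan = ["excavate", "pour_foundation", "frame_walls", "install_roof"]

# Scheduling answers WHEN/HOW: assign each planned task a resource and a time
# slot, respecting precedence (each task starts after the previous one ends).
durations = {"excavate": 3, "pour_foundation": 2,
             "frame_walls": 5, "install_roof": 2}
crews = {"excavate": "crew A", "pour_foundation": "crew B",
         "frame_walls": "crew A", "install_roof": "crew B"}

start = 0
schedule = []
for task in plan:
    end = start + durations[task]
    schedule.append((task, crews[task], start, end))
    start = end  # precedence constraint: the next task waits for this one

for task, crew, s, e in schedule:
    print(f"{task:16s} {crew}  day {s:2d} -> day {e:2d}")
```

A real scheduler, of course, optimises over many feasible assignments against an objective such as cost or resource utilisation, rather than simply taking tasks in plan order.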
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657138718.61/warc/CC-MAIN-20200712113546-20200712143546-00378.warc.gz
CC-MAIN-2020-29
8,803
36
https://channel9.msdn.com/Forums/Coffeehouse/533570-Three-different-browser-over-three-weeks--the-winner-is-/060d7ff2764a43588ce59deb00da6d9d
code
Interesting. I have a mental picture of the users of each:
Safari - Fashion conscious, slightly pretentious and loud, likes to be seen at parties, maybe owns a sports car.
Firefox - Young and thrusting types who like to rebel a little (not too much though)
Chrome - Wouldn't be seen dead running Safari.
Opera - Well, it's not your Dad's browser, but ..?
IE - Sad old geeks who are more concerned with other things (as long as it gets the job done).
There go the IT people again. Stop trying to classify people! You will end up with 4 billion classes! I am very pretentious (don't know what it means, but I want it!), I am loud (very, even worse when on beer), I am the center of attention at parties (not in a good way), I drive a sports car (side skirts, alloys, phat tires, ultra low and hard suspension, turbo diesel!) and use IE 8! So your classification fails the first unit test!
What's up with people trying to identify themselves by the products they buy? Why not rely on your own strength instead of these symbols?
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813322.19/warc/CC-MAIN-20180221024420-20180221044420-00219.warc.gz
CC-MAIN-2018-09
1,017
11
http://www.linuxquestions.org/questions/linux-software-2/how-do-you-make-mplayer-with-shuffle-and-loop-0-a-4175424274/
code
Currently I start mplayer with the following command:
mplayer -playlist playlist -loop 0
and it works fine. However, if I use
mplayer -playlist playlist -loop 0 -shuffle
my songs will play in a random order, but the loop does not work anymore. The player will stop when it has played all the songs from the playlist. Is there any way to make MPlayer shuffle and also loop forever? Thank you for helping!
Syst: CentOS 6.x x86_64
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188926.39/warc/CC-MAIN-20170322212948-00222-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
426
8