Dataset columns: url (string, lengths 13 to 4.35k); tag (string, 1 class); text (string, lengths 109 to 628k); file_path (string, lengths 109 to 155); dump (string, 96 classes); file_size_in_byte (int64, 112 to 630k); line_count (int64, 1 to 3.76k)
https://www.scrum.org/forum/scrum-forum/51896/there-any-squad-team-model-which-doesnt-have-tester-or-qa-do-they
code
Is there any squad team model which doesn't have a tester or QA? Are they successful? I'm researching a topic about the importance of a tester in a squad team, but I wonder whether there is any squad that doesn't have a tester but still works well. And if so, how do they achieve that? Really appreciate any reply on this thread.

What is that "squad team" trying to achieve and what commitments, if any, are being made? A Scrum Team will create Done increments of usable quality every Sprint. Scrum Team members recognize that quality assurance is the shared accountability of all Developers, regardless of whether or not any one of them is seen as being a "tester".

I'd be highly suspicious of a team that doesn't have people who perform testing. However, I don't see a reason that a team needs a dedicated tester on the team. There's definitely value in having test specialists or subject matter experts in the organization, but there are many ways to have such specialists or experts support a team, and not all of those ways require having that individual on the team.

I have worked with many teams that did not have individuals with QA in their title on the team. In fact, my previous two employers and my current employer do not have any QA titles in the organization. However, as noted by @Thomas Owens, there are people that test. They are the same people that write the code. They create automated tests, they have their peers review the code (including tests) and use that as a part of validating the quality of what is being done. They validate manually if there is a need to do so. Quality is owned by everyone in the Scrum Team and is part of the work that needs to be done in order to satisfy the Definition of Done. Remember that Scrum does not have any job titles listed in the Guide anywhere. There are 3 roles: Product Owner, Scrum Master and Developers. The only plural role, Developers, is meant to be made up of individuals that have the skills needed to create, maintain and release value increments. How your org decides to do this is up to you.

"importance of a tester in a squad team" In Scrum, there is no tester role. Yes, you can take out that role. But what do you think of the quality of what you deliver? Quality is everybody's responsibility in the team, so we believe testing is one of the ways to ensure quality. We want quality and we want delivery.

I'm seeing that, with the increasing size of the stack (i.e. web development in the cloud), the list of skills and tasks a developer needs to have is very, very long. I'm finding it difficult to get developers to do non-happy-path testing; this is not unusual. We're finding that devs feel overloaded with tasks: the stack is very large, the DoD is detailed, and there is so much to do in order to get an item to Done. Very few developers enjoy testing or have any formal training in testing, so the testing is just enough. The risk here is that the app does not get tested beyond the happy path, issues will be found in the wild, and there is increased cost and delay. I have found in other projects that the use of one or two test experts from the org can help guide the team and even do some testing themselves. Obviously the addition of automated tests is optimal, but this also takes time and is another task in the long list of tasks for devs. I've coached for years that there are no sub-roles in Scrum, and now I am seeing this causing issues.

Yes, there are models where development teams work without dedicated testers or quality assurance (QA) specialists. This approach is often referred to as the "Whole Team" or "DevOps" model.
In this model:
- Collaborative Responsibility: Everyone in the development team, including developers, takes collective responsibility for both creating and testing the software.
- Continuous Testing: Testing becomes an integral part of the development process rather than a separate phase. Developers write automated tests alongside their code to ensure its correctness.
- Faster Iterations: Without a distinct QA phase, the development cycle can be faster, allowing for quicker iterations and releases.
- Collaborative Problem-Solving: The team collaborates closely to identify and address issues early in the development process. This often leads to higher-quality software.
- Automation Tools: Automation tools for testing, continuous integration, and continuous delivery become crucial for maintaining the reliability of the software.
- Feedback Loops: Continuous feedback loops are established, ensuring that issues are identified and resolved promptly during the development process.

The success of such a model depends on the team's ability to embrace collaboration and implement effective automated testing practices. A strong focus on code quality, continuous integration, and proactive issue resolution is essential. It might be challenging for teams to adopt this model without a cultural shift towards collaboration and shared responsibility. Automated testing requires time and effort to set up but pays off in the long run. While some teams successfully operate without dedicated testers, the effectiveness of this model depends on the team's skill set, collaboration, and commitment to maintaining high-quality code through automated testing practices.
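As a small, hedged illustration of the point made repeatedly in this thread (developers owning quality through automated tests as part of the Definition of Done), here is a minimal developer-written test in Python with pytest; the function and scenario are invented for the example, not taken from any poster:

def apply_discount(price: float, percent: float) -> float:
    # Hypothetical production code under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_happy_path():
    assert apply_discount(100.0, 20) == 80.0

def test_non_happy_path_rejects_bad_input():
    # The non-happy-path coverage the thread notes developers often skip.
    import pytest
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)

Run with pytest; in teams without a dedicated tester, tests like these typically gate the build in CI.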
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474737.17/warc/CC-MAIN-20240228143955-20240228173955-00428.warc.gz
CC-MAIN-2024-10
5,227
34
https://adventuresinmommydom.org/shop/
code
Stuff to know as you purchase: - These are digital products, which means you'll get a link in an email, and there will be a download link on the page after your purchase. - The links do expire after a week. - You can download your product three times. - If you do not get an email within a few minutes, check your spam folder. Sometimes it's finicky and goes to the wrong folder. - Occasionally there will be a problem with your order. I'll get it worked out as soon as I can. I sadly have not figured out how to clone myself to do multiple things at once, so it may take a bit.
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647153.50/warc/CC-MAIN-20180319214457-20180319234457-00483.warc.gz
CC-MAIN-2018-13
580
6
http://www.cl.cam.ac.uk/teaching/1011/CST/node38.html
code
Course material 2010–11 Lecturer: Dr R.J. Gibbens No. of lectures: 12 Prerequisite course: Probability This course is a prerequisite for Computer Graphics and Image Processing (Part IB) and the following Part II courses: Artificial Intelligence II, Bioinformatics, Computer Systems Modelling, Computer Vision, Digital Signal Processing, Information Theory and Coding, Quantum Computing.

The aim of this course is to introduce and develop mathematical methods that are key to many modern applications in Computer Science. The course proceeds on two fronts: (i) Fourier methods and their generalizations that lie at the heart of modern digital signal processing, coding and information theory and (ii) probability modelling techniques that allow stochastic systems and algorithms to be described and better understood. The style of the course is necessarily concise but will attempt to blend a mix of theory with examples that glimpse ahead at applications developed in Part II courses.

- Fourier methods. Inner product spaces and orthonormal systems. Periodic functions and Fourier series. Results and applications. The Fourier transform and its properties. [3 lectures]
- Discrete Fourier methods. The Discrete Fourier transform and related algorithms and applications. [2 lectures]
- Wavelets. Introduction to wavelets with computer science applications. [1 lecture]
- Inequalities and limit theorems. Bounds on tail probabilities, moment generating functions, notions of convergence, weak and strong laws of large numbers, the central limit theorem, statistical applications, Monte Carlo simulation. [3 lectures]
- Markov chains. Discrete-time Markov chains, Chapman-Kolmogorov equations, classifications of states, limiting and stationary behaviour, time-reversible Markov chains. Examples and applications. [3 lectures]

At the end of the course students should
- understand the fundamental properties of inner product spaces and orthonormal systems;
- grasp key properties and uses of Fourier series and transforms, and wavelets;
- understand discrete transform techniques and their applications;
- understand basic probabilistic inequalities and limit results and be able to apply them to commonly arising models;
- be familiar with the fundamental properties and uses of discrete-time Markov chains.

* Pinkus, A. & Zafrany, S. (1997). Fourier series and integral transforms. Cambridge University Press.
* Ross, S.M. (2002). Probability models for computer science. Harcourt/Academic Press.
* Mitzenmacher, M. & Upfal, E. (2005). Probability and computing: randomized algorithms and probabilistic analysis. Cambridge University Press.
* Oppenheim, A.V. & Willsky, A.S. (1997). Signals and systems. Prentice Hall.
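A quick, informal illustration of two of these topics (not part of the course page), assuming Python with NumPy is available:

import numpy as np

# Discrete Fourier transform of a sampled sinusoid; Parseval's identity as a sanity check.
x = np.sin(2 * np.pi * 3 * np.arange(8) / 8)        # 3 cycles over 8 samples
X = np.fft.fft(x)
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / 8)

# Stationary distribution of a two-state discrete-time Markov chain:
# the left eigenvector of the transition matrix P for eigenvalue 1, normalised to sum to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(pi)   # approximately [0.833, 0.167]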
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118963.4/warc/CC-MAIN-20170423031158-00421-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,715
21
https://hireukrainiandevelopers.com/available-developers-for-hire/ruby-developer-andriy/
code
Andriy is a full-stack web developer with more than 7 years of experience and a background in open-source technologies. Andriy has worked with projects of varying size and complexity, from small websites to long-term projects with high workloads. Linux advanced user, MySQL, GIT, PHP, Drupal. Get Ruby Developer CV: please fill in the form below to send your request for downloading the CV. Other Developers for Hire in Ukraine: Oleksandr is an experienced QA engineer with a solid knowledge of testing tools and processes. He has worked in multiple domains, such as gambling, educational CRM platforms, green energy, and some others. View CV. Viktor is a highly skilled developer with proven expertise in mobile development. He is focused on building beautiful, well-functioning applications from scratch. Viktor also has experience in team management. View CV. Paul has 5+ years of commercial experience: he has worked on many projects, from small startup apps to large enterprise applications. Challenging tasks that he has worked on include migration from jQuery to ReactJS, separating service implementation in the scope of a microservices architecture, and separating gem implementation with payment system integration. View CV.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151563.91/warc/CC-MAIN-20210725014052-20210725044052-00240.warc.gz
CC-MAIN-2021-31
1,220
8
https://aidanfinn.com/?paged=3&cat=52
code
There are just 14 days left for you to get off of W2003 and W2003 R2 before patches dry up and security is a theory on your network. Boo hoo. Of course, you understand that I really don't care for sob stories – I see them more as tragic comedies.

You have probably already heard about Windows Insider, a program for providing feedback and shaping the future of Windows on client devices – note that I did not say "Windows 10" because the Insiders program will live beyond the RTM of Windows 10 this summer. - Search for or browse for feedback - Comment on and vote for existing feedback - Submit your own unique ideas Now let's be realistic – not everything will be done: - Some ideas are daft 🙂 - You'll find a few things that are already in the TPv2 release of WS2016 - Some things won't suit Microsoft's strategy - And some things will take more time than is available – but maybe planning for future releases will be impacted Here's what I've voted for, commented on or submitted so far: - Remember Domain Logins: I find it really annoying that the TPv2 release won't remember previous domain logons and I have to type my domain\username over and over and over and … - Storage Replica Requirement of Datacenter Edition: Microsoft is planning to only include SR in the Datacenter edition of WS2016. Most of the storage machines I see are physical and licensed with Standard or Storage Server editions. It'll probably be cheaper to go with 3rd party software than DC edition 🙁 - Storage Spaces advanced tiering: I like the idea of bringing a cloud tier to Windows Server, instead of reserving it in the silly StorSimple appliance. I don't agree with restricting it to Storage Spaces. - Create a Hyper-V Cluster without AD: Imagine a HYPER-V world (don't let the SQL heads muddy the waters) without Kerberos!!! Simple SOFS, simple Live Migration, and yes, System Center would need to catch up. - VM Placement Without System Center: Even those who can afford or want to deploy SCVMM often choose not to enable Dynamic Optimization. Let's bring this feature into Windows Server, where it belongs. - New integrated UI for Hyper-V: Let's replace Hyper-V Manager, Failover Cluster Manager, and SCVMM with one integrated Hyper-V tool that is a part of Windows Server. The cloud folks can use Azure Stack. SCVMM is broken, and the experience is fragmented. Everyone agrees except fanboys and SCVMM team members. - Change how Hyper-V Manager creates VM folder structure: Sarah, Ben & Taylor – if you fix this, I guarantee a round of applause at the next Ignite. This is the CMD prompt CTRL+V of Hyper-V. This is your opportunity to shape Windows Server. I've had that privilege as an MVP – it's not always immediate but there are headline things in WS2016 that I've contributed some feedback for and it feels damned good to see them presented on stage. You can feel that too. If you choose to stay silent, then please stay that way when you're unhappy.

In this post I will show you how to set up a Scale-Out File Server using Windows Server 2016 Storage Spaces Direct (S2D). Note that: - I'm assuming you have done all your networking. Each of my 4 nodes has 4 NICs: 2 for a management NIC team called Management and 2 un-teamed 10 GbE NICs. The two un-teamed NICs will be used for cluster traffic and SMB 3.0 traffic (inter-cluster and from Hyper-V hosts). The un-teamed networks do not have to be routed, and do not need the ability to talk to DCs; they do need to be able to talk to the Hyper-V hosts' equivalent 2 * storage/clustering rNICs.
- You have read my notes from Ignite 2015 - This post is based on WS2016 TPv2

Also note that: - I'm building this using 4 x Hyper-V Generation 2 VMs. In each VM SCSI 0 has just the OS disk and SCSI 1 has 4 x 200 GB data disks. - I cannot virtualize RDMA. Ideally the S2D SOFS is using rNICs.

Deploy at least 4 identical storage servers with WS2016. My lab consists of machines that have 4 DAS SAS disks. You can tier storage using SSD or NVMe, and your scalable/slow tier can be SAS or SATA HDD. There can be a max of two tiers only: SSD/NVMe and SAS/SATA HDD. Configure the IP addressing of the hosts. Place the two storage/cluster networks into two different VLANs/subnets. My nodes are Demo-S2D1, Demo-S2D2, Demo-S2D3, and Demo-S2D4.

Install Roles & Features
You will need: - File Services - Failover Clustering - Failover Cluster Manager if you plan to manage the machines locally. Here's the PowerShell to do this: Add-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools You can use -ComputerName <computer-name> to speed up deployment by doing this remotely.

Validate the Cluster
It is good practice to do this … so do it. Here's the PoSH code to validate a new S2D cluster:

Create your new cluster
You can use the GUI, but it's a lot quicker to use PowerShell. You are implementing Storage Spaces so DO NOT ADD ELIGIBLE DISKS. My cluster will be called Demo-S2DC1 and have an IP of 172.16.1.70. New-Cluster -Name Demo-S2DC1 -Node Demo-S2D1, Demo-S2D2, Demo-S2D3, Demo-S2D4 -NoStorage -StaticAddress 172.16.1.70 There will be a warning that you can ignore: There were issues while creating the clustered role that may prevent it from starting. For more information view the report file below.

What about Quorum?
You will probably use the default of dynamic quorum. You can either use a cloud witness (a storage account in Azure) or a file share witness, but realistically, Dynamic Quorum with 4 nodes and multiple data copies across nodes (fault domains) should do the trick.

Enable Client Communications
The two cluster networks in my design will also be used for storage communications with the Hyper-V hosts. Therefore I need to configure these IPs for Client communications: Doing this will also enable each server in the S2D SOFS to register its A record with the cluster/storage NIC IP addresses, and not just the management NIC.

Enable Storage Spaces Direct
This is not on by default. You enable it using PowerShell:

Browsing Around FCM
Open up FCM and connect to the cluster. You'll notice lots of stuff in there now. Note the new Enclosures node, and how each server is listed as an enclosure. You can browse the Storage Spaces eligible disks in each server/enclosure.

Creating Virtual Disks and CSVs
I then create a pool called Pool1 on the cluster Demo-S2DC1 using PowerShell – this is because there are more options available to me than in the UI: New-StoragePool -StorageSubSystemName Demo-S2DC1.demo.internal -FriendlyName Pool1 -WriteCacheSizeDefault 0 -FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk (Get-StorageSubSystem -Name Demo-S2DC1.demo.internal | Get-PhysicalDisk) Get-StoragePool Pool1 | Get-PhysicalDisk |? MediaType -eq SSD | Set-PhysicalDisk -Usage Journal Then you create the CSVs that will be used to store file shares in the SOFS.
Rules of thumb: - 1 share per CSV - At least 1 CSV per node in the SOFS to optimize flow of data: SMB redirection and redirected IO for mirrored/clustered storage spaces

Using this PoSH you will lash out your CSVs in no time: $CSVNumber = "4" $CSVName = "CSV" $CSV = "$CSVName$CSVNumber" New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName $CSV -PhysicalDiskRedundancy 2 -FileSystem CSVFS_REFS -Size 200GB Set-FileIntegrity "C:\ClusterStorage\Volume$CSVNumber" -Enable $false

The last line disables ReFS integrity streams to support the storage of Hyper-V VMs on the volumes. You'll see from the screenshot what my 4 node S2D SOFS looks like, and that I like to rename things: Note how each CSV is load balanced. SMB redirection will redirect Hyper-V hosts to the owner of a CSV when the host is accessing files for a VM that is stored on that CSV. This is done for each VM connection by the host using SMB 3.0, and ensures optimal flow of data with minimized/no redirected IO. There are some warnings from Microsoft about these volumes: - They are likely to become inaccessible on later Technical Preview releases. - Resizing of these volumes is not supported. Oops! This is a technical preview and this should be pure lab work that you're willing to lose.

Create a Scale-Out File Server
The purpose of this post is to create a SOFS from the S2D cluster, with the sole purpose of the cluster being to store Hyper-V VMs that are accessed by Hyper-V hosts via SMB 3.0. If you are building a hyperconverged cluster (not supported by the current TPv2 preview release) then you stop here and proceed no further. Each of the S2D cluster nodes and the cluster account object should be in an OU just for the S2D cluster. Edit the advanced security of the OU and grant the cluster account object Create Computer Object and Delete Computer Object rights. If you don't do this then the SOFS role will not start after this next step. Next, I am going to create an SOFS role on the S2D cluster, and call it Demo-S2DSOFS1. New-StorageFileServer -StorageSubSystemName Demo-S2DC1.demo.internal -FriendlyName Demo-S2DSOFS1 -HostName Demo-S2DSOFS1 -Protocols SMB

Create and Permission Shares
Create 1 share per CSV. If you need more shares then create more CSVs. Each share needs the following permissions: - Each Hyper-V host - Each Hyper-V cluster - The Hyper-V administrators You can use the following PoSH to create and permission your shares. I name the share folder and share name after the CSV that it is stored on, so simply change the $ShareName variable to create lots of shares, and change the permissions as appropriate. $ShareName = "CSV1" $SharePath = "$RootPath\$ShareName\$ShareName" New-SmbShare -Name $ShareName -Path $SharePath -FullAccess Demo-Host1$, Demo-Host2$, Demo-HVC1$, "Demo\Hyper-V Admins" Set-SmbPathAcl -ShareName $ShareName

Create Hyper-V VMs
On your hosts/clusters create VMs that store all of their files on the path of the SOFS, e.g. \\Demo-S2DSOFS1\CSV1\VM01, \\Demo-S2DSOFS1\CSV1\VM02, etc.

Remember that this is a Preview Release
This post was written not long after the release of TPv2: - Expect bugs – I am experiencing at least one bad one by the looks of it - Don't expect support for a rolling upgrade of this cluster - Bad things probably will happen - Things are subject to change over the next year

In this survey I asked: What percentage of your APPLICATION servers run with MinShell or Core UI? Consultants: Please answer with the most common customer scenario.
- 0% – All of my servers have a FULL UI - 40-60% – Around half of my servers have MinShell or Core UI - 80-100% – All of my servers have MinShell or Core UI

In other words, I wanted to know what the market penetration was like for non-Full UI installations of Windows Server. I had a gut feeling, but I wanted to know for sure. I was worried about survey fatigue, and sure enough we had a drop from the amazing 425 responses of the previous survey. But we did have 242 responses: Once again, we saw a great breakdown from all around the world with the USA representing 25% of the responses. Once again I recognize that the sample is skewed. Anyone, like you, who reads a blog like this, follows influencers on social media, or regularly attends something like a TechNet/Ignite/community IT pro event is not a regular IT pro. You are more educated and are not 100% representative of the wider audience. I suspect that more of you are using non-Full UI options (Hyper-V Server, MinShell or Core) than in the wider market. Here we go: So the vast majority of people are not using any installations of MinShell or Core for their application servers. Nearly 15% have a few Core or MinShell installations and then we get into tiny percentages for the rest of the market. We can see quite clearly, that despite the evangelizing by Microsoft, the market prefers to deploy valuable servers with a UI that allows management and troubleshooting – not to mention support by Microsoft. Is there a regional skewing of the data? Yes, to some extent. The USA (25% of responses) has opted to deploy a Full UI slightly less than the rest of the world: You can see the difference when we compare this to a selection of EU countries including: Great Britain, Germany, Austria, Ireland, The Netherlands, Sweden, Belgium, Denmark, Norway, Slovenia, France and Poland (53% of the survey). FYI, the 4 responses that indicated that 80-100% of application servers were running MinShell or Core UI came from: - USA (2) - Germany (2)

I am slightly less hardline with Full VS Core/MinShell when it comes to application servers than I am with Hyper-V hosts. But, I am not in complete agreement with the Microsoft mantra of Core, Core, Core. I know that when it comes to most LOB apps, even large enterprises have loads of those awful single or dual server installations that right-minded admins dislike – if that's what devs deploy then there's little we can do about it. And those are exactly the machines that become sacred cows. However, in large scale-out apps where servers can be stateless, I can see the benefits of using Core/MinShell … to a limited extent. To be honest, I think Nano would be better when it eventually makes it to a non-infrastructure role. What do you think? Post your comments below.

And we're back with a follow-up survey. The last time I asked you about your Hyper-V hosts and the results were very interesting. Now I want to know about your Windows Server application servers, be they physical, on VMware, Hyper-V, Azure, AWS, or any other platform. Note: I do not care about any hosts this time – just the application servers that are running Windows Server. Here is the survey: As before, I'll run the survey for a few days and then post the results. Please share this post with colleagues and on social media so we can get a nice big sample from around the world.

Lots of folks that are using Windows Server Technical Preview (from October 2014) were facing a ticking time bomb. The preview is set to expire on April 14th (tomorrow).
Microsoft released a hotfix that will extend the life of the preview until the next preview is released in May. Lots of folks have reported that this hotfix didn't fix their issue. According to Microsoft: - If you are running Datacenter edition with a GUI then you need to activate the install with the key from here. - Sometimes you will need to run SLMGR /ato to reactivate the installation.

Microsoft made two significant announcements yesterday, further innovating their platform for cloud deployments. Last year Microsoft announced a partnership with Docker, a leader in application containerization. The concept is similar to Server App-V, the now deprecated service virtualization solution from Microsoft. Instead of having 1 OS per app, containers allow you to deploy multiple applications per OS. The OS is shared, and sets of binaries and libraries are shared between similar/common apps. Hypervisor versus application containers These containers can be deployed on a physical machine OS or within the guest OS of a virtual machine. Right now, you can deploy Docker app containers onto Ubuntu VMs in Azure, managed from Windows. Why would you do this? Because app containers are FAST to deploy. Mark Russinovich demonstrated a WordPress install being deployed in a second at TechEd last year. That's incredible! How long does it take you to deploy a VM? File copies are quick enough, especially over SMB 3.0 Direct Access and Multichannel, but the OS specialisation and updates take quite a while, even with enhancements. And Azure is actually quite slow, compared to a modern Hyper-V install, at deploying VMs. Microsoft use the phrase "at the speed of business" when discussing containers. They want devs and devops to be able to deploy applications quickly, without the need to wait for an OS. And it doesn't hurt, either, that there are fewer OSs to manage, patch, and break. Microsoft also announced, with their partnership with Docker, that Windows Server vNext would offer Windows Server Containers. This is a form of app container that is native to Windows Server, all manageable via the Microsoft and Docker open source stack. But there is a problem with containers; they share a common OS, and sets of libraries and binaries. Anyone who understands virtualization will know that this creates a vulnerability gateway … a means to a "breakout". If one application container is successfully compromised then the OS is vulnerable. And that is a nice foothold for any attacker, especially when you are talking about publicly facing containers, such as those that might be in a public cloud. And this is why Microsoft has offered a second container option in Windows Server vNext, based on the security boundaries of their hypervisor, Hyper-V. Windows Server vNext offers Windows Containers and Hyper-V Containers Hyper-V provides secure isolation for running each container, using the security of the hypervisor to create a boundary between each container. How this is accomplished has not been discussed publicly yet. We do know that Hyper-V containers will share the same management as Windows Server containers and that applications will be compatible with both.

It's been a little while since a Microsoft employee leaked some details of Nano Server. There was a lot of speculation about Nano, most of which was wrong.
Nano is a result of Microsoft's, and their customers', experiences in cloud computing: - Infrastructure and compute - Application hosting Customers in these true cloud scenarios have the need for a smaller operating system and this is what Nano gives them. The OS is beyond Server Core. It's not just Windows without the UI; it is Windows without the I (interface). There is no logon prompt and no remote desktop. This is a headless server installation option, that requires remote management via: - Desired State Configuration (DSC) – you deploy the OS and it configures itself from a template you host - RSAT (probably) - System Center (probably) Microsoft also removed: - 32 bit support (WOW64) so Nano will run just 64-bit code - MSI meaning that you need a new way to deploy applications … hmm … where did we hear about that very recently *cough* - A number of default Server Core components Nano is a stripped down OS, truly being incapable of doing anything until you add the functionality. The intended scenarios for Nano usage are in the cloud: - Hyper-V compute and storage (Scale-Out File Server) - "Born-in-the-cloud" applications, such as Windows Server containers and Hyper-V containers In theory, a stripped down OS should speed up deployment, make install footprints smaller (we need non-OEM SD card installation support, Microsoft), reduce reboot times, reduce patching (pointless if I reboot just once per month), and reduce the number of bugs and zero day vulnerabilities. Nano Server sounds exciting, right? But is it another Server Core? Core was exciting back in W2008. A lot of us tried it, and today, Core is used in a teeny tiny number of installs, despite some folks in Redmond thinking that (a) it's the best install type and (b) it's what customers are doing. They were and still are wrong. Core was a failure because: - Admins are not prepared to use it - The need to have on-console access We have the ability to add/remove a UI in WS2012 but that system is broken when you do all your updates. Not good. As for troubleshooting, Microsoft says to treat your servers like cattle, not like pets. Hah! How many of you have all your applications running across dozens of load balanced servers? Even big enterprise deploys applications the same way as an SME: on one to a handful of valuable machines that cannot be lost. How can you really troubleshoot headless machines that are having networking issues? On the compute/storage stack, almost every issue I see on Windows Server and Hyper-V is related to failures in certified drivers and firmwares, e.g. Emulex VMQ. Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft's HCL testers nor the OEMs are capable of even the most basic test right now. In my opinion, the entry of containers into Windows Server and Hyper-V is a huge deal for larger customers and cloud service providers. This is true innovation. As for Nano, I can see the potential for cloud-scale deployments, but I cannot trust the troubleshooting-incapable installation option until Microsoft gives the OEMs a serious beating around the head and turns off hardware offloads by default.
This post is dedicated to the person that refuses to upgrade from Windows Server 2003. I'm not targeting service providers and those who want to upgrade but face continued resistance. But if you are part of the problem, then please feel free to be offended. Please read it before you hurt your tired fingers writing a response. I'm not going to pussy-foot around the issue. I couldn't give a flying f**k if your delicate little feelings are dented. You are what's wrong in our industry and I'll welcome your departure. Yes. You are professionally negligent. You've decided to put your customers, stockholders, and bosses at legal risk because you're lazy. You know that support is ending on July 14th 2015 for Windows Server 2003, Windows Server 2003 R2, SBS 2003 and SBS 2003 R2, but still you plan on not upgrading. Why? You say that it still works? Sure, and so did this: Photo of Windows Server 2003 administrator telling the world that they won't upgrade You think you'll still get security fixes? Microsoft is STOPPING support, just like they did for XP. Were you right then? No, because you are an idiot. So you work for some government agency and you'll reach a deal with Microsoft? On behalf of the tax payers of your state, let me thank you for being a total bollocks – we'll be paying at least $1 million for year one of support, and that doubles each year. We'll be landed with more debt because of your incompetent, work-shy habits. You think third parties like some yellow-pack anti-malware or some dodgy pay-per-fix third party will secure you? Let me give you my professional assessment of that premise: HAHAHAHAAHAHAHAHAH! Maybe other vendors will continue supporting their software on W2003? That's about as useful as a deity offering extended support for the extracted failed kidney of a donor patient. If Microsoft isn't supporting W2003, etc, then how exactly is Honest Bob's Backup going to support it for you? Who are they going to call when there's a problem that they need assistance on? Are you really that naive? Even regulators recognise that "end of support" is a terminal condition. VISA will be terminating business with anyone still using W2003 as part of the payment operation. You won't be able to continue PCI compliance. Insurance companies will see W2003 as a business risk that is outside the scope of the policy. And hackers will have an easy route to attack your network. "Oh poor me – I have an LOB app that can't be replaced and only runs on W2003". Well; why don't you upgrade everything else and isolate the crap out of that service? Allegedly, there is an organ rattling inside that skull of yours so you might want to shake the dust off and engage it! I have zero sympathy for your excuses. I know some of you will protest my comments. Your excuses, not reasons, only highlight your negligence. You've had a decade and 4 opportunities to upgrade your server OS. You can switch to OPEX cloud systems (big 3 or local) to minimise costs. You could have up-skilled and deployed services that are included in the cost of licensing WS2012 R2 instead of spending your stockholders' or tax payers' funds on 3rd party solutions. Yeah, I don't have many good things to say to you, the objector, because, to be quite honest, there is little good to be said about you as an IT professional. This post was written by Aidan Finn and has no association with my employers, or any other firm I have associated with. If you're upset, then go cry in a dark room where you won't annoy anyone else.
I, like everyone else, have no idea what Microsoft’s plans are for release dates. And, BTW, I’ve cared less and less about System Center since the 2012 SP1 release, mainly thanks to Microsoft changing the licensing of System Center back then and killing sales completely in my market. I also have no inside information on System Center. I make guesses based on stages of development cycle, news, rumours, and past practices, etc. But I was damned sure that Microsoft was going to RTM Windows Server vNext in Q3 (July-Sept) 2015. I did think that GA was going to be after the GA of Windows 10, allowing the client OS to get some headlines by itself. But looking back, I forgot one thing, which I’ll get to and should have been obvious all along. The news that broke last week (I was on the road) that “Windows Server and System Center” were not going to come out until 2016 really surprised me. I’ve seen some speculation on Twitter that “issues” in Windows Server are delaying the release. That is QUITE a jump in logic. I would remind everyone to take a look at the announcement again … We’d also like to share a little more on what to expect from Windows Server and System Center this year. As we continue to advance the development of these products, we plan to release further previews through the remainder of 2015, with the final release in 2016. … “Windows Server and System Center”. Isn’t it interesting that none of the speculators has mentioned System Center and assumed that “bugs” in Windows Server is the cause of this relatively late release for Windows Server? Only on one occasion in the history of System Center (including previous to the System Center label) has System Center been released at the same time as Windows Server; that was with the last release (2012 R2) and even then, some customers were unhappy that System Center was not on feature parity with Hyper-V and Windows Server (storage and networking) – they still aren’t BTW. Can System Center catch up with Windows Server by Q3 of this year? Hmm …. let’s see how much Microsoft has already announced in the cloud aspects of Windows Server vNext. That’s quite a bit of work accomplished by the Server group, right? I don’t think System Center could catch up in such a tight time frame. Remember, they don’t just have to keep up, but they have to add value. So why can’t Microsoft release Windows Server ahead of System Center like they have done before? There’s three aspects to this: - Promises: Microsoft promised that System Center would be released with Windows Server. They cannot offer free ammunition to rivals and sceptics. - Sales to Cloud/Enterprise: Microsoft account managers do not sell Windows Server. They sell bundles like ECI or CIS to their enterprise customers. The customer is getting Windows Server and System Center. Look at the last quarterly results to see how System Center had double digit growth. That doesn’t come in sales to SMEs – Microsoft killed that market 2 years ago with the SML license. - Upgrades: Customers will not upgrade Windows Server if they manage it using System Center. And remember that some elements like VMM do not support newer versions of Windows Server Hyper-V. My gut is screaming that this delay is nothing to do with Windows Server and everything to do with System Center. But that’s just me … guessing … with a little bit of history influencing my gut. 
While I am disappointed that I won’t be talking, writing, and presenting on a new release later this year, I guess it means we’ll get a more feature rich, complete, and tested release sometime in 2016. That’s a good thing. Ignite 2015 will still have LOTS of great content on Windows 10, cloud innovations, and best practices for current tech (which most still aren’t using or are barely using), but Ignite 2016 could be quite the event to launch at! Yeah, I’m guessing that the launch of Windows Server and System Center 2016 will be May 2016 🙂 As I blogged last night, Microsoft released the technical preview releases for the Threshold generation of Windows Server and System Center, as well as Windows 10. Maybe by now you’ve started your downloads and begun exploring. Maybe you’d like a little bit of reading to prepare you for what’s to come? Here’s what I could find so far: - What’s New in the Windows Server Technical Preview: The content in this section describes what’s new and changed in Windows Server® Technical Preview. The new features and changes listed here are the ones most likely to have the greatest impact as you work with this release. - Release Notes: Important Issues in the Windows Server Technical Preview: These release notes summarize the most critical issues in the Windows Server® Technical Preview operating system, including ways to avoid or work around the issues, if known. - Release Notes for System Center Technical Preview: These release notes provide information about System Center Technical Preview. To evaluate System Center Technical Preview, you need to be running Windows Server® Technical Preview and Microsoft SQL Server 2014. - Features removed in System Center Technical Preview: The following is a list of features and functionalities in System Center Technical Preview that have been removed from the product in the current release. This list is subject to change in subsequent releases and may not include every removed feature or functionality.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587915.41/warc/CC-MAIN-20211026165817-20211026195817-00254.warc.gz
CC-MAIN-2021-43
30,043
186
https://blog.adafruit.com/2015/11/23/as-seen-on-show-and-tell-the-cloud-connected-weather-cloud/
code
As Seen on Show and Tell: The Cloud connected Weather Cloud Two weeks ago on Adafruit's Show and Tell, Richard Albritton shared his epic Cloud Connected Weather Cloud project. We must say, it looks awesome in action! Thanks to Richard for sharing and writing in about the project as well. Richard writes: What better way to visualize cloud data than with an actual cloud. This project sets out to create a device that will visualize information using light and sound surrounded by a white fluffy cloud. The Weather Cloud connects to Weather.com and pulls the current forecast for your location. Weather conditions are pre-programmed into the cloud that will put on a light and sound performance for each weather condition that comes up. You can use IFTTT to set up the connection to Weather.com as well as a schedule for the cloud to follow so that it is already showing you what to expect for your commute. Aside from the default weather stuff, you can manually trigger any of the weather performances you want. It may be raining outside, but the sun is shining and the birds are chirping inside. Fall asleep to the soothing sounds of a rumbling thunderstorm. Other notifications can also be added to the cloud. Use IFTTT to set up the cloud to blink red when you have new email or change from Blue to Green when the Seahawks game is on. Now for the techy stuff, this device is powered by the Adafruit HUZZAH ESP8266 Wi-Fi board along with the SoundFX board and some NeoPixels. Adafruit.io and IFTTT provide the interaction between the user and device. Adafruit HUZZAH ESP8266 Breakout: Add Internet to your next project with an adorable, bite-sized WiFi microcontroller, at a price you like! The ESP8266 processor from Espressif is an 80 MHz microcontroller with a full WiFi front-end (both as client and access point) and TCP/IP stack with DNS support as well. While this chip has been very popular, it's also been very difficult to use. Most of the low cost modules are not breadboard friendly, don't have an onboard 500mA 3.3V regulator or level shifting, and aren't CE or FCC emitter certified….UNTIL NOW! (read more) Have an amazing project to share?
The Electronics Show and Tell is every Wednesday at 7pm ET! To join, head over to YouTube and check out the show’s live chat – we’ll post the link there.
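For readers curious how such a build hangs together, here is a heavily simplified sketch in CircuitPython style (the pin, pixel count, palette and trigger are assumptions for illustration, not Richard's actual code; a real build would read the condition from an Adafruit IO feed updated by IFTTT, as described above):

import time
import board
import neopixel

pixels = neopixel.NeoPixel(board.D2, 30, brightness=0.4)   # assumed pin and pixel count

# Colours for a few weather conditions.
PALETTE = {
    "clear": (255, 180, 40),   # warm sunny glow
    "rain": (0, 60, 255),      # cool blue
    "storm": (255, 255, 255),  # white flashes for lightning
}

def show(condition):
    colour = PALETTE.get(condition, (80, 80, 80))
    if condition == "storm":
        for _ in range(3):                      # crude lightning flicker
            pixels.fill(colour)
            time.sleep(0.05)
            pixels.fill((0, 0, 0))
            time.sleep(0.2)
    else:
        pixels.fill(colour)

show("rain")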
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.13/warc/CC-MAIN-20231206031946-20231206061946-00462.warc.gz
CC-MAIN-2023-50
3,723
18
https://www.iar.com/support/tech-notes/licensing/green-dongle-old-product-on-dual-core--quad-core/
code
This text is applicable for products that use Activator M hardware locks. (That is, small "dongles" that connect to a parallel port. This type of dongle has a green plastic housing.) The green dongle is not used in the current (supported) products. Green dongles were replaced by other dongles in 2001 (and onwards). The Installer fails to install the driver. It might help to run the installer in a single thread. But even if the installation works in this manner, there is no guarantee that the driver itself will work on a dual-core / quad-core machine. All product names are trademarks or registered trademarks of their respective owners.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513144.48/warc/CC-MAIN-20210117174558-20210117204558-00649.warc.gz
CC-MAIN-2021-04
788
6
https://crad.ict.ac.cn/cn/article/doi/10.7544/issn1000-1239.2017.20158391?viewType=HTML
code
Link duration prediction is an important criterion that determines many aspects of network performance in VANETs. Existing analytical methods for link duration based on node mobility in VANETs cannot predict the future link duration between any two nodes, so they are not practical for predicting the link duration between two vehicles. We propose a dynamic prediction model that considers the distribution of relative velocity, inter-vehicle distance, changes in traffic density and traffic lights to estimate the expected link duration between any pair of connected vehicles, because these factors change continuously while a link is connected. By taking the relative velocity distribution into account, the model is able to adjust its rule in real time to adapt to variations in vehicle speed. By automatically adjusting the way the relative distance between two vehicles is computed, the DPLD (dynamically predict link duration) model can automatically adapt to changes in that distance. Therefore, the DPLD model can effectively predict the link duration between the two vehicles. The model is implemented on each vehicle along with parameter-estimation methods for the relative velocity distribution; an exponential moving average method handles speed exceptions, and the impact of traffic lights on link duration is taken into account. Simulation results show that the model predicts link duration for urban scenarios with high accuracy.
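The paper's DPLD model is not reproduced here, but the baseline idea it refines can be stated simply: with a fixed transmission range R, a current separation d and a constant relative speed, the remaining link duration is the time until the gap grows to R. A hedged Python sketch of that naive estimate (all symbols and numbers are assumptions, for illustration only):

def naive_link_duration(R, d, dv):
    # R, d in metres; dv in m/s (positive = separating, negative = approaching).
    if dv == 0:
        return float("inf")        # vehicles keep pace; link persists
    if dv > 0:
        return (R - d) / dv        # separating: link breaks when gap reaches R
    return (R + d) / abs(dv)       # approaching: close the gap, pass, then separate

print(naive_link_duration(R=300, d=120, dv=5))   # 36.0 seconds

DPLD improves on this by letting the relative-velocity distribution, traffic density and traffic lights reshape the estimate as conditions change.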
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818337.62/warc/CC-MAIN-20240422175900-20240422205900-00539.warc.gz
CC-MAIN-2024-18
1,480
1
http://singletrackworld.com/forum/topic/cadair-idris-rideable-or-walk-instead
code
I'm going to be staying near Machynlleth this weekend. We are going with a non biking couple so plaining on doing some biking as well as walking Planning to go to CYB one day and climb a mountain another day. Cadair Idris seems like a good bet for walking as its nearby. So should I just walk up or is it worth dragging a bike up? I understand there are three routes up: Llanfihangel Path; Minfford Path and the Pony Path. Which if any are rideable, and are they all "legal". I guess coming from Scotland I could feign ignorance about silly access laws, although I'd rather not if there is a non - cheeky alternative
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125881.93/warc/CC-MAIN-20170423031205-00195-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
616
7
https://www.chegg.com/homework-help/questions-and-answers/pdf-problem-f-t-c-e-0075-t-t-greater-equal-0-q13236019
code
The waiting time, t, (in minutes), of an individual at the local Stop'N Go has the following probability density function (pdf): Find the value of C that makes this a legitimate probability density function. Find the cumulative distribution function. Find the probability that the length of time an individual waits at the Stop'N Go is between 2 and 3 minutes. Find the probability that the length of time an individual waits at the Stop'N Go is less than 2.5 minutes. Find the wait time such that 50% of all wait times are greater than that value. Find the mean wait time.
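A worked sketch of the answers, assuming the density implied by the page URL, f(t) = C·e^(-0.075·t) for t >= 0 (an exponential distribution); the numbers below hold only under that assumption:

import math

rate = 0.075
C = rate                                  # integral of C*e^(-rate*t) over [0, inf) is C/rate = 1
F = lambda t: 1 - math.exp(-rate * t)     # cumulative distribution function

p_2_to_3 = F(3) - F(2)                    # P(2 < T < 3)  ~ 0.062
p_lt_2_5 = F(2.5)                         # P(T < 2.5)    ~ 0.171
median = math.log(2) / rate               # 50% of waits exceed this, ~ 9.24 minutes
mean = 1 / rate                           # ~ 13.33 minutes

print(p_2_to_3, p_lt_2_5, median, mean)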
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525634.13/warc/CC-MAIN-20190718125048-20190718151048-00347.warc.gz
CC-MAIN-2019-30
575
1
http://docs.autodesk.com/MAP/2010/ENU/AutoCAD%20Map%203D%202010%20User%20Documentation/HTML%20Help/files/WS1a9193826455f5ff73c538a911af8b6901a-7977.htm
code
Use this dialog box to add, activate, merge, or drop versions for a data store to which you are currently connected. When you save or discard a version, all features in the drawing that were queried from that version are removed from the drawing. You cannot undo saving or discarding a version. If an error occurs during a version-management operation, the affected item in the dialog box displays an error indicator. To see the cause of the error, hold your cursor over this indicator. If you create a version and the operation fails, you will see a new version with an error indicator. The version has not really been created. It is a placeholder to display the error. Errors remain visible until you close the dialog box, fix the errors, and redisplay the dialog box. Commit your edits to the selected version. This option is available for child versions only. If you merge the active version, its parent version is activated and then the selected version is merged and removed from the Version tree. Discard the selected version. When you drop a version, all edits saved to that version are discarded. This option is available for child versions only. If you drop the active version, its parent version is activated and then the selected version is dropped and removed from the Version tree.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583831770.96/warc/CC-MAIN-20190122074945-20190122100945-00360.warc.gz
CC-MAIN-2019-04
1,295
4
https://freelanceachievers.com/organizational-culture-and-readiness-assessment-2/
code
Section headings and letters for each section component are required. Responses are addressed in narrative form in relation to that number. Evaluation of the proposal in all sections is based upon the extent to which the depth of content reflects graduate-level critical-thinking skills. 1) Section A: Organizational Culture and Readiness Assessment 2) Section B: Problem Description 3) Section C: Literature Support 4) Section D: Solution Description 5) Section E: Change Model 6) Section F: Implementation Plan 7) Section G: Evaluation of Process Each section (A-G) will be submitted as separate assignments so your instructor can provide feedback (refer to applicable modules for further descriptions of each section). The final paper will consist of the completed project (with revisions to all sections), title page, abstract, reference list, and appendices. Appendices will include a conceptual model for the project, handouts, data and evaluation collection tools, a budget, a timeline, resource lists, and approval forms.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00618.warc.gz
CC-MAIN-2022-40
1,029
10
http://soa.sys-con.com/node/2530258
code
By Jake Robinson | February 8, 2013 12:00 PM EST

As an Infrastructure-as-a-Service provider, Bluelock sees a lot of migration of applications. Migration is occurring from physical servers to cloud, from private cloud to public cloud and back to private cloud from public cloud. Migration can be tricky and a poor migration strategy can be responsible for costly time delays, data loss and other roadblocks on your way to successfully modernizing your infrastructure. While each scenario is different, I'd like to identify three key best practices that will help your team create a solid, successful plan for migrating your application. Even before you begin to move your application, there's a lot of best practice that goes into choosing which application to migrate to the cloud. Regardless of whether you are migrating that app to a public cloud or a private cloud, you should assess the app for data gravity and connectivity of the application. Best Practice: Understand the Gravity of Your Data Data Gravity is a concept first discussed by Dave McCrory in 2010. It's the idea that data has weight and the bigger the data is, the harder it is to move. The bigger the data, the more things are going to stick to it. McCrory states in his original blog post about Data Gravity, "As data accumulates (builds mass) there is a greater likelihood that additional Services and Applications will be attracted to this data." McCrory goes on to explain that large data can be virtually impossible to move because of latency and throughput issues that develop upon movement. On his website, datagravity.org, McCrory explains that to increase the portability of an application it should have a lower data gravity. When moving tier one applications from a physical datacenter to a private or public cloud, we have to take data gravity into account because it will impact the migration. As you are talking about migrating an application, you can think of the full stack of components as a single VM or a group of VMs that are a vApp (see Figure 1). Think of a VM with an OS. If we were to migrate that entire VM to the public cloud, we're copying anywhere from 8-20 GB of data with that OS for no reason at all as the cloud you're migrating the app to might already have the OS available to it. Rather than transferring the data for the OS, whenever possible use metadata instead to describe what OS you want and the configurations using a template or an image on the public or private cloud side. The same metadata concept can be applied to middleware instances too. What we're left with is our actual data and what the app is. The app is static and static info is easy to move because you can copy it once. There's no need to replicate. The most difficult part of the migration is the data, however. There's no easy way to shrink down the data, so you need to evaluate the weight of the data in the app you're considering migrating. Especially if you're a high transaction company, or if it's a high transaction application, as that would be a lot of data to replicate. The data of the app constitutes 99% of the data gravity of the application. Part of the best practice of understanding the gravity of your application is to understand the ramifications of moving a tier one application with a large amount of data and establish where the best home for that application is. Another aspect that you should evaluate as part of your pre-migration plan is to determine how connected your VM or vApp is to other apps.
If you have a lot of applications tightly coupled to the application you want to migrate, the cloud might not be an option for that application, or at least not for that application alone. Best Practice: How Connected Is Your App? Beyond what applications are connected to the app you want to migrate, the important aspect to evaluate is how coupled the application in question is to other applications, and how tight or loose that coupling is. Does your application have data that other applications need to access quickly? If so, a move-all-or-nothing philosophy is your best option. If you have an application that is tightly coupled to two or three others, you may be able to move them all to the cloud together. Because they are still tightly coupled, you won't experience the latency that would occur if your cloud-hosted application needed to access a physical server to get the data it needs to run. A step beyond identifying how many apps are tied to the application you wish to migrate, work next on identifying which of those applications will be sensitive to latency problems. How sensitive they are should be a consideration in whether you should migrate the app or not. To be able to check this best practice off your list, be very sure you understand everything your application touches so you won't be surprised later, post-migration. The final part gets down to the nitty gritty... choosing the correct migration strategy. Best Practice: Pick Your Migration Strategy. Your best-fit migration strategy will be a function of the features of the application. Option one is migration of just the data. This is typically the correct choice for tier 1 and 2 applications. Let's say you are able to migrate your VM or vApp. But, it's constantly changing and if it's a tier one application, we may not be able to afford a lot of downtime. Typically, we'll have to invoke some sort of replication. Replication is an entirely separate subject, but when I think of replication, I think of the size of the data, the rate of change and the bandwidth between our source and target. Without going into too many details of replication, let's assume you use some sort of SQL or MySQL program for database replication. What you've done is set up your new cloud to have this OS provision. You've got a MySQL provision and the two SQLs are talking to each other and replicating the data. Option two for migrating your application is machine replication. This is best for tier 1 and tier 2 applications that can afford some downtime. It involves stack migration. There is less configuring in this scenario, but there is more data migrating. Option two is best if you're moving to an internal private cloud. You will be able to replicate the entire stack because you have plenty of bandwidth to move stuff around. It's important to note the portability of VMware, because VMware allows you to package the entire VM/vApp, the entire stack, into an OVF. The OVF can then be transported anywhere if you're already on a virtualized physical server. Option three involves cold P2V migration. You typically see this for tier 2 and 3 apps that are not already virtualized. The concept involves taking a physical app and virtualizing it. VMware has a VMware converter that does P2V, and it's very easy to go from a physical to a private cloud using P2V. It is, however, an entirely different set of best practices. In option three, there is no replication.
Those apps can also be shipped off to a public cloud provider to run in the public cloud after being virtualized. A final path some companies take is to treat the migration as a Disaster Recovery (DR) scenario: set up replication from one machine to another, replicate the entire stack from point A to point B, and then click the failover button. Each application, and each migration strategy, is unique, so there is no detailed instruction manual that works for everyone. The best strategy for some applications may be to stay put, especially if steps one and two of the pre-migration evaluation show that the application is tightly coupled to others or its data is especially weighty. To truly enjoy the benefits of cloud, you want the right application running, one that you can leverage to the fullest extent. When planning your migration strategy, ask for help from those who are familiar with similar use cases, and plan and evaluate extensively to save yourself the time, money and headaches that come from rushing into a migration without a strategy. 
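The article keeps the database-replication mechanics of option one abstract. As a rough illustration only, assuming the MySQL setup the author mentions (the hostnames, account names, passwords and log positions below are hypothetical, and binary logging must already be enabled on the source), the "two SQLs talking to each other" idea looks roughly like this:

-- On the source (on-premises) server: create a replication account
-- and note the current binary log position.
CREATE USER 'repl'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- record the File and Position values for the step below
UNLOCK TABLES;

-- On the target (cloud) server: point the replica at the source using the
-- recorded file and position, then start replicating until cutover.
CHANGE MASTER TO
  MASTER_HOST = 'onprem-db.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'choose-a-strong-password',
  MASTER_LOG_FILE = 'mysql-bin.000042',
  MASTER_LOG_POS = 1234;
START SLAVE;

Once the replica has caught up, cutover is a matter of pointing the application at the new database and stopping writes to the old one.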
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064538.25/warc/CC-MAIN-20150827025424-00056-ip-10-171-96-226.ec2.internal.warc.gz
CC-MAIN-2015-35
18,476
84
https://coderanch.com/t/455916/java/declare-class
code
I think the way in which I write programs is wrong, so I just need to clarify it. I use lowercase letters for naming the class, and sometimes I even write many classes in the same program (i.e. in a single file). This has become a habit for me. I think there may be many more mistakes that I make without even knowing. So please, someone look at the code below and point out the mistakes. It would really be a great help for me. ... and sometimes I even write many classes in the same program (i.e. in a single file). This has become a habit for me. A better way is to put each Java class in a separate file, with a name that's the same as the class name (and the extension .java, of course). Sun's Java compiler even requires this for public classes. Looking at your code, here are some more tips: Make your member variables private, unless a more permissive access level is required. Currently, your member variables all have the default access level. Indent your code consistently; that makes it easier to read. There is a great book about how to write good Java code: Effective Java. Buy it - you won't regret it, and it will make you a better Java developer.
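To make the reviewers' advice concrete, here is a minimal sketch of a class laid out the way they describe; the class name and fields are invented for illustration and are not from the original thread:

// ShoppingCart.java -- one public class per file; the file name matches the class name.
public class ShoppingCart {

    // Member variables are private unless a wider access level is really needed.
    private int itemCount;
    private double total;

    public void addItem(double price) {
        // Consistent indentation (four spaces here) makes the code easier to read.
        itemCount++;
        total += price;
    }

    public double getTotal() {
        return total;
    }
}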
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249501174.94/warc/CC-MAIN-20190223122420-20190223144420-00221.warc.gz
CC-MAIN-2019-09
1,157
5
https://community.spiceworks.com/topic/109187-outlook-2000-password-prompt
code
Exchange server 2003, Outlook 2000 (ya, old I know) I have one user with some kind of corrupted relationship between their domain login and their exchange mailbox. Every time she logs in, then runs Outlook, she gets a prompt for user, domain, password, with nothing pre-filled in. This happens immediately. It also happens on a *different machine* with a brand new installation of Outlook 2000. What ties the credentials between the domain and Exchange? How can I reset that? I have tried forcing the user to change password at next login from AD. Then having her close Outlook, log completely out, reboot the machine, log back in. Did not help. I've looked at her exchange-related settings in AD, and nothing is obviously wrong. It used to work just fine, it was triggered by a recent password expiration. She's important and she can't get into her email, please help! Outlook web access is enabled? Have her use that until you figure it out. If not, enable it. I've been on Exchange 2010 for a month or 2 now, so my 03 tricks are getting a little faded. I would go ensure though that her domain user has full rights to the mailbox, as well as the fully associated external account. We are using AD replication. I tried your suggestion, and it filled in the user and domain this time, but still would not accept the new password, or any of the past 3 passwords. There are a couple of events in her logs that might help: From System log: Event Type: Warning Event Source: LSASRV Event Category: SPNEGO (Negotiator) Event ID: 40960 Time: 10:45:37 AM The Security System detected an attempted downgrade attack for server cifs/westsrv2.palcotelecom.com. The failure code from authentication protocol Kerberos was "The user account has been automatically locked because too many invalid logon attempts or password change attempts have been requested. But her account is not showing up as locked in AD, and westsrv2 is not a domain controller, its a file server??? In the App Log, I got this last time I rebooted (but not previously): Event Type: Error Event Source: Userenv Event Category: None Event ID: 1053 Time: 10:43:08 AM User: NT AUTHORITY\SYSTEM Windows cannot determine the user or computer name. (An internal error occurred. ). Group Policy processing aborted. There's nothing suspicious in the App log before that. David: she has all of her mail delivered to a .pst, so she can't use Webmail. She has a lot of stuff in her .pst that she has to refer back to, so only getting new mail wouldn't help her much. We were able to login to her Webmail yesterday, with an old password. I think it was the one before she was forced to change it on Tuesday. Roaming profiles? I know you said you tried on two machines, but see if Windows/AD is saving an old set of credentials, recently happened to one of my users. Control Panel > User Accounts > Advanced Tab > Manage Passwords Anything in that list is a cached password and should be deleted. Información Tech is an IT service provider. Where she is located? as you commented you are using replication, I recommend to you force AD replication in order to refresh them. Well, she might have a rule moving that email to a PST, but it has to be in exchange originally. She also might have her outlook setup to be using POP3 or IMAP, assuming those are enabled. It sounds like they did some odd setup of her account. I would highly recommend having her use Exchange and Outlook the right way, which wouldn't preclude moving email to an archive PST file. 
Fully associating her account with the mailbox would be the way to go to at least get a start. I'd also make sure she's got a good backup of this PST file, as normally these aren't kept on network drives and also normally not backed up. Also, PSTs can be password protected, is that what you're being prompted for? SteveS: you're a genius!!! We don't have roaming profiles, but somehow her user account was set to cache the mailserver password. Jose & David, all great suggestions, and things I hadn't checked yet, thank you for helping me.
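For anyone hitting the same problem: the cached entry that Control Panel > User Accounts > Advanced > Manage Passwords exposes can also be inspected and cleared from a command prompt on newer Windows versions (Vista/7 and later); the target name below is hypothetical and would be whatever server shows up in the list. On older clients such as XP, the Control Panel dialog is the only built-in way.

rem List credentials cached for the current user
cmdkey /list

rem Delete the stale entry stored for the Exchange server
cmdkey /delete:exchangeserver.example.com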
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718278.43/warc/CC-MAIN-20161020183838-00201-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
4,041
41
https://www.construct.net/en/forum/construct-classic/construct-classic-discussion-37/iterative-saveplease-35490/page-2
code
Here's my argument for a manual iterative backup feature like the one Somebody is suggesting: 1. It gives you more control over when and why a backup is made. Backups aren't decided by an arbitrary means (in this case, time). This means that you don't have to wait for Construct to decide it's backup time, you can just hit a key and go. 2. It solves the problem of sifting through several .bak#.cap files to find the one you need if you want to revert to a previous version. I think also that the backup numbering needs an upgrade. Currently it saves as bak1, back2, etc. But when listing these by name in Windows you get this: I propose that the numbering scheme pads the number with extra 0s, such as bak001, bak002, so that they are listed in the proper order. Perhaps the number of zeros could be taken from the "Number of backups" field in the preferences. The shortcut key Ctrl-B currently does nothing in Construct, this could create a manual backup. Also, you should be able to create a manual backup without having to turn on the Auto-backup feature.
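A tiny sketch of the sorting problem and the proposed fix (Python is used here only to illustrate; Construct's backup naming is not scriptable like this):

# Unpadded names sort incorrectly as strings:
unpadded = ["bak{}.cap".format(i) for i in (1, 2, 10, 11)]
print(sorted(unpadded))   # ['bak1.cap', 'bak10.cap', 'bak11.cap', 'bak2.cap']

# Zero-padding to the width implied by the "Number of backups" setting fixes it:
width = len(str(999))     # e.g. a limit of 999 backups -> 3 digits
padded = ["bak{:0{}d}.cap".format(i, width) for i in (1, 2, 10, 11)]
print(sorted(padded))     # ['bak001.cap', 'bak002.cap', 'bak010.cap', 'bak011.cap']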
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00162.warc.gz
CC-MAIN-2022-49
1,060
6
https://forum.xda-developers.com/showthread.php?t=1064301
code
This ROM is made for tablets like pov Mobii, Viewpad 10s, Smartsurfer 360 MN10U, that have an internal 3G modem and GPS. Of course it works on similar tablets like the Advent Vega, XVision, ... My main goal is to get a ROM that gives full support to 3G and GPS and is as stable and fast as possible. Ganbarou ROM V0.3 Screenshots, changelog and tips: Ganbarou (頑張ろう = Japanese for "Try your best!!") ROM V0.3 - Added NTFS automount (Again many thanks to the_corvus for the scripts) - Fixed small bug with TPUtility. It crashed due to some wrong libraries. - New kernel - CorvusKernel 0.2 (Many thanks to the_corvus for compiling a stock version of his very fast kernel) - Bluetooth problem solved - Changed file manager from Astro to standard Android file manager (taken from CM7 ROM). - Removed NI keyboard (can be downloaded and installed from here) - Name spelling corrected - Get SetCPU from the market (or here from XDA) and select autodetect on the first start (if you have started it earlier, press menu and then the left button "device"). Then you can move the slider to 1500000 (or whatever speed you want). You can define profiles as well. Seems to work on my ROM. - Download CPU Meter from the market. It should detect the frequencies and let you use them. Don't change any other parameter; use at your own risk. USB host mode [source]: Press the power button, and once the display gets power (you will see the difference), press and hold the back button until booting finishes. Have fun and give me feedback. And as usual: You use this ROM at your own risk. I am not responsible if you break your device, lose your warranty, get into a fight with your wife or lose important company data from your SDcard. THIS ROM RUNS BY DEFAULT ON 1.5GHZ OVERCLOCKING. GET SETCPU OR CPU METER FROM THE MARKET TO REGULATE THE SPEED. PERMANENT OVERCLOCKING MIGHT DAMAGE YOUR TABLET. YOU GOT THIS WARNING HERE AND I DO NOT TAKE ANY RESPONSIBILITY IF YOUR TABLET ENDS UP BURNING.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891705.93/warc/CC-MAIN-20180123012644-20180123032644-00350.warc.gz
CC-MAIN-2018-05
1,949
19
https://www.hash.com/forums/index.php?s=066b6c9373a978af15ca52b41ccbb641&showtopic=36309
code
Just a Wooden Sword Posted 11 October 2009 - 03:29 PM Posted 12 October 2009 - 10:26 AM Posted 12 October 2009 - 11:00 AM Posted 12 October 2009 - 03:11 PM Posted 12 October 2009 - 03:16 PM Posted 18 October 2009 - 11:06 AM I was wondering if anyone could point me to any tutorials for projecting fire from an object. thanks again I don't have a tut, but if you take the fire tut in TAoA:M and increase the particle velocity from the emitter you'll be on the way to shooting fire. Posted 18 October 2009 - 07:21 PM
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593010.88/warc/CC-MAIN-20180722041752-20180722061752-00433.warc.gz
CC-MAIN-2018-30
1,416
20
http://blog.geeksmithology.com/2006/04/20/soa-who-cares
code
…’SOA’ might have meant something once but now it’s just vendor bullshit. So relates Tim Bray in a recent blog post, and I say “hear, hear!” Whether an open SOAP layer over a JMS queue or a RESTful HTTP GET from one Perl CGI script to another, anyone that’s been in the trenches for a few years has done SOA. Yet, the architecture astronauts have struck again, claiming that every system and process can be reduced to a series of “services.” But this abstraction is so high (and leaky) that businesses are lead to believe that unless they spend thousands of dollars on middleware and employ a team of EAI surgeons, they are doomed. I admit that these solutions can be appropriate, but I also smell charlatans with hammers seeing a lot of nails. Eventually ESBs and MOM will go the way of the RDBMS, making the transition from proprietary goldmine for experts to commodity for the masses. Even now there are open source alternatives (like Mule) emerging. So what’s the point? We, as an industry, need to stop pushing architectures to proselytize on behalf of vendors, and start producing solutions based on what our clients actually need. And when the next “paradigm shifting” tool arrives, we don’t hold it high as the new aegis under which we play upon FUD to fill industry coffers. Rather, we relegate the gewgaw to the toolbox — where it belongs.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119995.14/warc/CC-MAIN-20170423031159-00359-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,377
5
https://stackoverflow.com/questions/25733028/windows-task-scheduler-not-running-vbscript-when-set-to-run-if-logged-on-or-not
code
I'm trying to run a .vbs file as a scheduled task through Windows Task Scheduler. Under the 'General' tab, when I select "Run only when user is logged on", the script executes as expected. However, when I select "Run whether user is logged on or not", and enter the appropriate credentials, the task runs at the scheduled time, but the script does not actually run. I've already tried running the script under wscript.exe as well as cscript.exe, but no luck with either. EDIT: Even if I am logged in when the task begins, the script will still not run under the "logged in or out" setting. Additional info: The purpose of this scheduled task is to run before I arrive at work. I've already configured my BIOS to startup at a predetermined time (06:00), and set the Task Scheduler to run at 06:27. I've successfully tested the BIOS startup, as well as the script itself (including using the Task Scheduler to run it). Therefore, the only weak link I can find is the option to "Run whether the user is logged on or not". I'm running Windows 7 Enterprise. Any help would be appreciated!
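Not a diagnosis by itself, but it can help to rule out relative paths and an implicit working directory by registering the task with a fully qualified command line and stored credentials; the task name, paths and account below are placeholders:

schtasks /create /tn "MorningScript" ^
  /tr "C:\Windows\System32\cscript.exe //B //Nologo C:\Scripts\morning.vbs" ^
  /sc daily /st 06:27 ^
  /ru MYDOMAIN\myuser /rp *

The /rp * switch prompts for the password so the task can run whether the user is logged on or not; //B runs the script in batch mode so it never waits on a prompt, and //Nologo suppresses the banner.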
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141193856.40/warc/CC-MAIN-20201127161801-20201127191801-00671.warc.gz
CC-MAIN-2020-50
1,083
6
https://pixelsmithstudios.com/user/orangespice-games
code
This user has not added any information to their profile yet. Mobile 2D games, Puzzle games, Word games, Number games, Card Games, Logic and Puzzle games, Quizzes, Educational and Learning games, Advergames, Info-games and Serious games. WORDFIX Word Game - launched 9 July 2016 SUDOMATIK Mini Killer Sudoku - launched 5 January 2017 Windows - http://www.microsoft.com/store/apps/9nblggh440sx Android - http://play.google.com/store/apps/details?id=com.orangespicegames.sudomatik Facebook - https://apps.facebook.com/1611438005547286 Gameroom - https://www.facebook.com/playongameroom?app_id=1611438005547286 Ornamental Christmas - Memory Game - launched 3 January 2017 Short-term insurance industry advergame - Not available yet - client pending.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330968.54/warc/CC-MAIN-20190826042816-20190826064816-00220.warc.gz
CC-MAIN-2019-35
746
10
http://www.lucasforums.com/showthread.php?t=66972
code
You can extract images, sound and other things with SCUMM Revisited and then you can edit them. I'm sorry, but I don't know any way of re-inserting resources into LucasArts' data files to show the modified things in the game. As I explained to others in this forum, in-game texts are editable, but you can't put longer strings than the original ones.
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257831770.41/warc/CC-MAIN-20160723071031-00294-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
348
4
https://community.grafana.com/t/grafana-elasticsearch-alert-issue/51432
code
I've got some issues with Elasticsearch and Grafana alerting on a plain count. I have a panel in Grafana filtered by an ES query. The graph looks good so far; however, the alert won't work properly. At first I set an alert rule with count(A,5m,now)… etc. As this led to some strange results, I googled a little bit, and as far as I can tell it does a count of a count (which is wrong). So I switched the alert rule to max(), last(), etc., but I always get a 0 result… even if I set the time range extremely wide, like in the picture below. It would be nice if anyone could explain this behavior, as I don't know what's going on or how to fix it. Thank you in advance.
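To picture why count-of-count misbehaves: a Lucene/Elasticsearch query panel already returns one document count per time bucket, roughly like the hand-written aggregation below (the index and field names are made up, and this is not Grafana's generated query), so wrapping that series in count() in the alert rule counts buckets rather than documents, while a reducer such as sum(), max() or last() reads the values the graph actually shows.

POST /my-index/_search
{
  "size": 0,
  "query": { "term": { "status": "error" } },
  "aggs": {
    "errors_over_time": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1m" }
    }
  }
}

Each bucket in the response carries a doc_count, and it is that per-bucket value the alert condition should be evaluated against.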
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00751.warc.gz
CC-MAIN-2022-40
649
6
https://docs.moodle.org/dev/Random_question_type
code
Random question type The random question is not a real question but only a device to randomly choose a question from a category. The $question->questiontext field is abused as a flag: 1 means choose a question from the category and its subcategories, 0 means only use questions in the category itself. When a new session is started for a random question, a question is chosen from the category. We need to make sure that no question is used more than once in the quiz. Therefore the following need to be excluded: - All questions that are explicitly assigned to the quiz - All questions that are already chosen by another random question - Random questions - Other explicitly excluded question types - Wrapped questions To do the first, the question type uses an additional property $quiz->questionsinuse that holds a comma-separated list of all questions used in the current quiz. To do the second, the questiontype class has a property 'catrandoms', which is an array indexed by category id and by $question->questiontext. Each entry is a randomized array of questions in that category which can be used. The list of explicitly excluded question types is: var $excludedtypes = array("'random'", "'randomsamatch'", "'essay'", "'description'"); The answer field for random questions comes in two flavours: - For responses stored by Moodle version 1.5 and later, the answer field has the pattern random#-* where the # part is the numeric question id of the actual question shown in the quiz attempt and * represents the student response to that actual question. - For responses stored by older Moodle versions, the answer field is simply the question id of the actual question. The student response to the actual question is stored in a separate record. This means that prior to Moodle version 1.5, random questions needed two records for storing the response to a single question. From version 1.5 and later the question type random works like all the other question types in that it now only needs one record per question. The random question type does not store any options of its own; $question->options has a single property 'question', which is set to the fully instantiated question object for the randomly chosen question.
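As a rough sketch only (not the actual Moodle source), the exclusions described above combine into a selection query along these lines; the table name, the placeholder style and the parent = 0 condition for wrapped questions are assumptions for illustration:

<?php
// Hypothetical illustration of how the exclusion lists could be combined.
$excludedtypes = array("'random'", "'randomsamatch'", "'essay'", "'description'");

$quiz = new stdClass();
$quiz->questionsinuse = '12,17,42';   // comma-separated ids already used in this quiz

$sql = "SELECT id FROM question
         WHERE category = ?
           AND id NOT IN ($quiz->questionsinuse)
           AND qtype NOT IN (" . implode(',', $excludedtypes) . ")
           AND parent = 0";           // skip wrapped (child) questions

echo $sql;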
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334974.57/warc/CC-MAIN-20220927002241-20220927032241-00567.warc.gz
CC-MAIN-2022-40
2,246
18
https://forums.openqnx.com/t/topic/20526
code
I recently downloaded the QNX 6.2 NC ISO, made a self-booting CD and installed QNX 6.2 to the first 6 Gig partition on an IDE drive. QNX runs very well - DHCP all OK and internet connection (via cable) all OK. However the Mozilla Ver 0.9.8 included with the download won't run. No error messages, no crashes - just nothing. Tried downloading the 1.0 Mozilla from the QNX website - it appears to install OK but Mozilla still won't run. One odd thing after installing either version of Mozilla is that the icons in Launch - Internet - Mozilla are square blue boxes - probably not the actual Mozilla icons. Have tried numerous re-installs of the operating system but no joy. Can't get Mozilla to run from a pterm either. Checked the QNX Knowledge Base and found two articles about Mozilla not running - but they were from 2000 and pertain to QNX 6.0.0. I did try setting the environment variable PHIG=1 as suggested in one of these articles and setting the ‘LD_LIBRARY_PATH’ to where Mozilla is installed as suggested in the other article. After doing this I still can't run Mozilla from the Launch bar, but if I then try to run Mozilla from a pterm I get a brief display of the Mozilla opening splash screen - then nothing more. No crash or system hangup, but no Mozilla is running. I have been monitoring this newsgroup now for a few weeks and have read all the back messages but haven't seen any threads on this particular problem. So I'm stumped. Anyone got any ideas what I'm doing wrong?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817206.28/warc/CC-MAIN-20240418093630-20240418123630-00693.warc.gz
CC-MAIN-2024-18
1,498
23
https://blogs.oracle.com/ebsandoraclecloud/leveraging-application-management-suite-to-compare-ebs-configurations-in-oracle-cloud
code
[Guest Author: Vasu Rao] Oracle Application Management Suite provides central configuration management capabilities that you can leverage to compare the configurations of your on-premises and cloud-based E-Business Suite environments. With these capabilities, you can compare configurations across instances and track how they change over time. Oracle Application Management Suite for Oracle E-Business Suite collects EBS technology stack configuration information at regular intervals and inserts this information into the Oracle Enterprise Manager Management Repository. Oracle Enterprise Manager acts as a central repository of all configuration information, which enables comparison of environments on premises and on Oracle Cloud. You can compare your configurations using time-based configuration snapshots, or you can schedule configuration comparisons between multiple Oracle E-Business Suite instances. You control the information-collection frequency, which is set to every 24 hours by default. Oracle Application Management Suite includes a set of comparison templates that allow you to selectively compare certain configurations, as shown in the image below. Sample Comparison of Key Site-Level Profile Options
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655921988.66/warc/CC-MAIN-20200711032932-20200711062932-00102.warc.gz
CC-MAIN-2020-29
1,146
5
https://www.tw200forum.com/forum/general-discussion/2784-tw200-key-cheap.html
code
You can post pics in the TEST forum with the download option (meaning, you don't need to host them on photobucket, snapfish, etc.), then either share the hyperlink to that test forum post or copy the image URL to this post. You can't post photos in any forum, except the TEST forum. Not sure why no one has taken advantage of that (unless they delete data over xx days old??).
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668561.61/warc/CC-MAIN-20191115015509-20191115043509-00433.warc.gz
CC-MAIN-2019-47
376
1
http://diy.stackexchange.com/questions/tagged/preparation+paint
code
How do I prepare an interior cement floor for painting? I have an enclosed carport that serves as 4th bedroom/den in a 30 year old house. This room has long been used as storage, etc, and is in fairly bad shape. I'm rehabbing it to serve as a home office, ... Oct 8 '11 at 19:46
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00341-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
2,156
53
https://pubs.lenovo.com/st550/setup_install_a_hot-swap_drive
code
Install a hot-swap drive Use this information to install a hot-swap drive. The following notes describe the type of drives that your server supports and other information that you must consider when you install a drive. - Depending on your server models, your server supports the following drive types: For a complete list of supported optional devices for the server, see: Lenovo ServerProven website The drive bays are numbered to indicate the installation order (starting from number “0”). Follow the installation order when you install a drive. See Front view. - You can mix drives of different types, different sizes, and different capacities in one system, but not in one RAID array. The following order is recommended when installing the drives: Drive type priority: NVMe SSD, SAS SSD, SATA SSD, SAS HDD, SATA HDD Drive size priority: 2.5 inch, 3.5 inch Drive capacity priority: the lowest capacity first The drives in a single RAID array must be the same type, same size, and same capacity. - If the drive bay has a drive filler installed, remove it. Keep the drive filler in a safe place for future use. (Figure 1. Drive filler removal) Touch the static-protective package that contains the new hot-swap drive to any unpainted surface on the outside of the server. Then, take the new hot-swap drive out of the package and place it on a static-protective surface. To install a hot-swap drive, complete the following steps: - A video of this procedure is available at YouTube - Slide the release latch to open the tray handle. Then, slide the drive into the drive bay until it snaps into position. - Close the tray handle to lock the drive in place.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474643.29/warc/CC-MAIN-20240225203035-20240225233035-00583.warc.gz
CC-MAIN-2024-10
1,656
17
http://windowsitpro.com/sql-server/download-latest-sql-server-70-sp4-security-update
code
Microsoft has provided an updated release of its security update for SQL Server 7.0 Service Pack 4 (SP4). The article "INF: SQL Server 7.0 Security Update for Service Pack 4" (Q327068, http://support.microsoft.com ) says that the security update now includes a previously reported fix that prevents an attacker from running existing Web tasks in the context of the creator of the Web task or inserting his own Web tasks into the SQL Server system.
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444582.16/warc/CC-MAIN-20141017005724-00031-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
900
5
https://lists.cabforum.org/pipermail/public/2013-October/033860.html
code
[cabfpub] Ballot 109 - Create SSL Performance Working Group gerv at mozilla.org Tue Oct 15 09:17:52 UTC 2013 On 15/10/13 05:34, Ben Wilson wrote: > Ballot 109 – Create SSL Performance Working Group Two instances of the same feedback makes a quorum; I'd like to modify my ballot to add a reference to security. I have used the word "acceptable" because to say anything more specific would be to prejudge the discussions of the group. If members feel the resulting advice does not lead to acceptable security, they would be free to vote against adopting the documents. I don't want to say "best", because e.g. 4096 is, in some small way, 'more secure' than 2048 but almost certainly not enough so for this document to advise using slower > Scope: the Working Group shall consider all matters having a bearing on the performance of > software deployments which use SSL and the Web PKI. Examples might > include: certificate contents, choice of proposed ciphers, webserver > configuration, and OCSP configuration. > Deliverables: the Working Group shall produce one or more documents > giving best practice guidance for getting the best performance from a > SSL deployment which uses the Web PKI , while still providing acceptable levels of security. More information about the Public
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00698.warc.gz
CC-MAIN-2022-40
1,282
23
https://coursemarks.com/course/single-page-application-with-asp-net-jquery-hands-on/
code
The single-page app you will build in this course is a shopping list application that uses every CRUD operation with HTTP requests, calling a RESTful web service – using ASP.NET Web API 2 – which saves your data persistently in a SQL Server database. Make yourself ready to learn some jQuery, HTML and CSS for the front end. And for the back end you will use Microsoft’s ASP.NET Web API 2 for the RESTful web service and Entity Framework with Code First Migrations to communicate with the database. On top of that you will learn how to publish your single-page app to Internet Information Services (IIS) so that everyone can access your new single-page application. Patrick, the author of this course, has built several web applications professionally as freelancer and employee and over the years he learned many things that you just don’t have to do to succeed in building a single-page application. This course will save you time, because you will learn the crucial and most important parts quick, so that you can get your single-page app out there in no time! Sound’s good? Let’s get started! What kind of single-page application will be built? During this course you will learn how to build a complete single-page application by building a simple shopping list web application – an app that comes in quite handy for almost everybody. In this web application the user will start by creating a new shopping list. After that she will be able to add items to her list, check them off and delete them. If the user wants to access a certain shopping list, she can do so by adding the id of the list in the URL – which will be delivered by your web app, of course. That way the user is able to create the list at her computer and open it afterwards with her smartphone when she is actually in the grocery store. What technology is used for the front end? There are so many frameworks out there that you simply don’t need or are just too big to start learning how to build single-page applications. In this course you will learn the basics that you will also need to know when you want to understand how frameworks like Angular work. Because when you start with Angular for example, you might get results sooner or later, but maybe you won’t know what actually happens under the hood. In this hands-on course you will learn and understand the essence of single-page applications by using the following technologies: •HTML – You will build the application like any other website with plain old HTML. •CSS – To change the appearance of the application you will use a little cascading style sheets. •Ajax – With the help of jQuery and Ajax you will make the actual calls to the web service which returns data from the database. What technology is used for the back end? The back end or server side will be implemented with .NET technologies. You will need a RESTful web service you will call from the front end, a framework that maps your C# models or classes to database tables and of course a database. The following technologies will be used for that matter: •ASP.NET Web API 2 – It’s the state-of-the-art framework that helps you build HTTP services easily. With Web API 2 you will build a RESTful web service that enables the front end (or any other client you want to reach in the future) to make all CRUD (create, read, update, delete) operations by using GET, POST, PUT and DELETE HTTP requests. •Entity Framework – An object-relational mapping (ORM) framework that allows you to map your C# models with actual database tables. 
This part is crucial to save your data persistently. •SQL Server – At first the database you will use in this course is a file that will be generated by Visual Studio. But later on, especially when you want to publish your app to IIS and make it available to the world, you will use a SQL Server database. So far for the server-side. Don’t worry, every technology is available for free! What tools do I need? The entire course uses the Microsoft stack to develop the single-page application – apart from the browser, which is Google Chrome. The following tools will be used and are totally free: •Visual Studio 2017 Community Edition – Most of the time you will develop the application in Visual Studio. It might help if you already know this IDE. Older versions of Visual Studio also work. •SQL Server Express Edition – This will be your database. The Express Edition is available for free and absolutely suits your needs. •SQL Server Management Studio – This application is perfect to manage your database. Don’t worry, you will learn how to use it step by step in this course. •Google Chrome – As mentioned above, during this course Google Chrome and its developer tools will be used to access the web application. But any other browser with developer tools available will also do the trick. This means you can also use Firefox, of course. Even Internet Explorer would work… but honestly, it’s not recommended. •Internet Information Services (IIS) – Not really a tool for developing the application but for publishing it. If you have no access to IIS you can still follow the steps of publishing and use the results later on with a Microsoft Hyper-V Server for free! Everything is taught in the lectures. Why should I pay for this course although there are so many free tutorials available? A good question! Indeed there are lots and lots of tutorials available online that might get you the information you are looking for. The advantage of this course is that you will get this one big package out-of-the-box. You will see every single step from start to finish on how to build your single-page application. Starting from the front end, then building the perfectly fitting solution for the back end and even publishing it on a server. You can’t miss anything, because you’re able to watch the whole development process. And if something is still unclear, you can always ask a question in the forums. And if you are still not happy you can get your money back – no questions asked.
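To give a flavour of the front-end/back-end contract the course describes, a jQuery Ajax call against a hypothetical Web API 2 route might look like the sketch below; the URL, field names and markup are illustrative and not taken from the course itself:

// Add an item to shopping list 42 and show it on success.
$.ajax({
    url: '/api/shoppinglists/42/items',   // hypothetical Web API 2 route
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify({ name: 'Milk', isChecked: false }),
    success: function (createdItem) {
        $('#items').append('<li>' + createdItem.name + '</li>');
    },
    error: function (xhr) {
        console.log('Request failed: ' + xhr.status);
    }
});

On the server side, the matching Web API 2 controller action would accept the posted JSON, save it through Entity Framework and return the created item.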
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100551.2/warc/CC-MAIN-20231205140836-20231205170836-00609.warc.gz
CC-MAIN-2023-50
6,097
28
https://www.estos.com/products/ecsta-series
code
ECSTA 5 series The ECSTA series as middleware enables communication between your telephone system and the Microsoft Windows world. By implementing the system protocol CSTA in the Microsoft TAPI standard, numerous added values for improved communication arise. Thanks to the middleware products of the ECSTA series, communication between the telephone system and the IT infrastructure is made possible. As a result, telephone systems and connected phones can be controlled easily from your PC. The ECSTA series enables a reliable connection of your telephone system to your IT world. With this connection, e.g. Unified Communication software with a CRM, ERP or ticket system, communication processes in the company can be usefully linked and made more efficient. For satisfied customers and a successful service. ProCall Enterprise is the Unified Communications Collaboration and CTI software for small and medium-sized businesses. With ProCall Enterprise, you can improve communication and daily work processes in your company. With the ECSTA series, you can create a perfect connection to your telephone system and thus benefit from business process integration and Comfort CTI. The CallControl Gateway brings classic telephone systems and Microsoft Lync / OCS together. In this way, you have the option to expand Microsoft Lync with classic telephony functions. For example, telephone calls can be made or received directly from your Lync client. The ECSTA series thereby establishes the connection between the world of your telephone system and the Microsoft IT world.
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256812.65/warc/CC-MAIN-20190522123236-20190522145236-00340.warc.gz
CC-MAIN-2019-22
1,571
6
https://mistserver.org/download
code
You can download the latest versions of MistServer open source here. For MistServer Pro downloads please go to My downloads after logging in. The Raspberry Pi images are an image of Arch Linux and work on both Raspberry Pi 2 and 3. The standard login is alarm/alarm and root/root. We recommend changing these to your own preference on your first boot. You can immediately start using MistServer by connecting to the MistServer interface. If you'd like to compile MistServer open source for yourself you're welcome to do so. You can find the source on our Github page.
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189734.17/warc/CC-MAIN-20170322212949-00631-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
567
2
https://www.geeksforgeeks.org/divide-a-string-in-n-equal-parts/
code
Difficulty Level: Rookie Write a program to print N equal parts of a given string. 1) Get the size of the string using the string function strlen() (present in string.h). 2) Get the size of a part: part_size = string_length/n. 3) Loop through the input string. In the loop, if the index becomes a multiple of part_size, then print a part separator (“\n”). For example, with n = 4 the output looks like: a_simpl e_divid e_strin g_quest In the above solution, the n equal parts of the string are only printed. If we want the individual parts to be stored, then we need to allocate part_size + 1 memory for each of the N parts (1 extra for the string termination character ‘\0’), and store the addresses of the parts in an array of character pointers. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
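A minimal sketch of the printing approach in C, using the example string implied by the output above (the variable names are illustrative):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char str[] = "a_simple_divide_string_quest";
    int n = 4;                            /* number of equal parts */
    int len = (int) strlen(str);

    if (len % n != 0) {
        printf("Invalid input: string cannot be divided into %d equal parts\n", n);
        return 1;
    }

    int part_size = len / n;
    for (int i = 0; i < len; i++) {
        if (i != 0 && i % part_size == 0)
            printf("\n");                 /* part separator */
        printf("%c", str[i]);
    }
    printf("\n");
    return 0;
}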
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883439.15/warc/CC-MAIN-20200703215640-20200704005640-00144.warc.gz
CC-MAIN-2020-29
2,135
30
http://news.squeak.org/category/smalltalk/page/2/
code
25 February, 2013 Bert Freudenberg announced new Smalltalk Bindings for Minecraft Pi. See his blog post here. 20 September, 2011 Years ago Google started the developing of V8, a super Just in time compiler for its Chrome Browser. return “Nevermind, I am a function, now”; The code above builds a very generic object, and add to it properties (which could be a function). Worst, you get very strange things, evaluting: I like dynamic languages like Self, but this is somewhat too…flexible :) 27 June, 2011 Worried that you won’t be able to come to Edinburgh for ESUG 2011? Well how about joining your fellow Smalltalkers in Argentina in November for Smalltalks 2011? The Smalltalks conference brings together more than 200 people from both academia and industry to discuss Smalltalk-based software over three days. Smalltalks conferences have included many high-quality presentations from industry and research, showing interesting applications of Smalltalk, advances in the Smalltalk language, didactic uses of Smalltalk and much more. As in previous years, there will be a dedicated research track for original scientific contributions to, or using, Smalltalk in general. If you’re interested in submitting a paper for the conference, the hard deadline is 22nd August 2011, with notification of acceptance by 23rd September. See the call for papers for more details of submission guidelines and criteria. 5 November, 2010 Stefan Marr has just announced on his blog the relase of RoarVM, the first single-image manycore virtual machine for Smalltalk. RoarVM is based on the work on Renaissance VM by David Ungar and Sam S. Adams at IBM Research, and was ported to x86 architecture by Stefan. From his post: “The RoarVM supports the parallel execution of Smalltalk programs on x86 compatible multicore systems and Tilera TILE64-based manycore systems. It is tested with standard Squeak 4.1 closure-enabled images, and with a stripped down version of a MVC-based Squeak 3.7 image.” Support for Pharo 1.2 is currently limited to 1 core, but this is being worked on! Here’s some indicative figures for this new VM (using an adapted version of tinyBenchmarks on an MVC image): 1 core 66M bytecodes/sec; 3M sends/sec 8 cores 470M bytecodes/sec; 20M sends/sec As Stefan notes “The RoarVM is a research project and is not as optimized for performance as the standard Squeak VM”. For comparison: Squeak 4.2.4beta1U, MVC image, OS X 555M bytecodes/sec; 12M sends/sec so you’ll need a few cores active before you start to see improvements over your existing image! There are also a number of known issues with the current implementation. 5 October, 2010 Lambda the Ultimate is celebrating 10 years of its own existence, 30 (nominal) years of Smalltalk-80 and PARC turning 40, by revisiting a classic article Design Principles Behind Smalltalk by Dan Ingalls. From the post: “Ingalls’s piece should be filed under Visionary Languages. Alas, no such category exists on LtU.” Does this mean that Smalltalk-80 was the last visionary language? 15 July, 2010 Anyone with an interest in the continuing role and development of Smalltalk has had lots to chew on over the past few days. As part of a series of investigations into the most widely-used programming languages, Computerworld Australia has published a conversation with Alan Kay about his role in the development of the “foundation of much of modern programming today: Smalltalk-80″, Object-Oriented Programming, and modern software development. InfoQ is running a series of interviews recorded at QCon London. 
One of these is a session with Ralph Johnson and Joe Armstrong discussing the Future of OOP, including their take on what Smalltalk got wrong and right. Finally, Gilad Bracha continues to lay out his vision for what he sees as Smalltalk’s successor, Newspeak. His latest post contains encouragement and advice for those interested in porting existing libraries and applications to Newspeak. 27 May, 2010 Following Google’s decision to focus on fewer organisations last year, ESUG co-ordinated a joint application for projects across all Smalltalk dialects this year, and were so successful in this venture that they got approval for 6 projects. You can find out more about the selected projects at the projects page. For the last two weeks or so, students have been talking and discussing with their mentors, reading and investigating about the projects, and perhaps getting an early start on their development work. This was in line with the GSoC deadlines that you can read at the ESUG GSoC site and at the GSoC blog. The organisers have told students to ask in case of problems or questions to their mentors but also to the community through the mailing list, so be prepared to help out with questions and issues that the students may have. Mariano says “Good luck to all students and enjoy this wonderful opportunity you have. Now we are in the best part of the program!”
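For readers who want a baseline to compare against the figures quoted above, the stock benchmark can be run from a Workspace in a standard Squeak image (the multicore-adapted harness used for the RoarVM numbers is not shown in the announcement):

"Evaluate with print-it in a Workspace; the receiver is just a seed value."
0 tinyBenchmarks
"=> a string such as '555000000 bytecodes/sec; 12000000 sends/sec', depending on the VM"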
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246660628.16/warc/CC-MAIN-20150417045740-00149-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
4,967
33
https://forums.asp.net/t/1857542.aspx?Hosted+TFS+Server+2012+From+Dynamsoft
code
Last post Nov 09, 2012 04:40 AM by Chloe Han Nov 09, 2012 04:40 AM|Chloe Han|LINK Team Foundation Server 2012 via managed hosting with support for SQL reporting and a dedicated VM Version control and TWAIN SDK developer Dynamsoft has announced the availability of Visual Studio Team Foundation Server 2012 managed hosting plans to provide support for SQL reporting services and SharePoint. Dynamsoft's TFS hosting provides two new plans — the managed hosting plan is provided with a dedicated VM, and a shared TFS hosting plan is also available, both based on Microsoft's TFS Server 2010 and 2012. TFS Hosting Plans and Pricing>> NOTE: Part of Microsoft's application lifecycle management masterplan, Visual Studio Team Foundation Server 2012 (TFS) is the collaboration platform that supports Agile development practices, multiple IDEs and platforms (locally or cloud based), and provides developer tools for project management. Dynamsoft's TFS hosting service includes version control (or source control) software configuration management and project management. The VM provided with the service offers users additional resources, such as their own database and other independence from shared hosting customers. Hosted within a SAS70 and CICA5970 certified data center in Vancouver, guaranteed bandwidth is 100 Mbps with no upload, download, or flow rate limitations. Dynamsoft's online uptime rate is 99.9x percent. The shared TFS hosting and managed TFS hosting services are also provided with 24x7 Dynamsoft technical support via online chat, phone, or email.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376006.87/warc/CC-MAIN-20210307013626-20210307043626-00392.warc.gz
CC-MAIN-2021-10
1,565
14
http://www.ssnakess.com/forums/general-discussion/25150-wc-disgrace-2.html
code
Thanks, Invictus and Lisa. SerpentLust, that's unfortunate, sounds like you're better off being out of there. Silly of your boss to let you go too because many new herp owners don't realize they need some of the stuff they do and by you helping them, it may lead to extra sales. Also, a well informed owner is much less likely to bring a herp back because of health problems or because they realize it isn't suitable for them. Finally, if the customer knows you're knowledgeable and helpful, they will hopefully become repeat customers for all their herp needs from heat bulbs to possibly another pet. Life is uncertain, eat dessert first
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720972.46/warc/CC-MAIN-20161020183840-00372-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
638
3
http://ojer.net/6-web-scraping-tools-to-obtain-data-without-coding/
code
1. Outwit Hub: Being a well known Firefox extension, Outwit Hub can be downloaded and integrated with your Firefox browser. It is a powerful Firefox add-on that comes with plenty of web scraping capabilities. Out of the box, it has data recognition features that will get your task done quickly and easily. Extracting data from different websites with Outwit Hub does not require any programming skills, and that is what makes this tool the first choice of non-programmers and non-technical people. It is free of cost and makes good use of its capabilities to scrape your data without compromising on quality.
2. Web Scraper (a Chrome Extension): It is an outstanding web scraping program to acquire information without any coding. In other words, we can say that Web Scraper is an alternative to the Outwit Hub program. It is available for Google Chrome users and allows us to set up sitemaps describing how our sites should be navigated. Moreover, it can scrape different web pages, and the outputs are obtained in the form of CSV files.
3. Spinn3r: Spinn3r is an excellent choice for programmers and non-programmers alike. It can scrape an entire blog, news website, social media profile or RSS feed for its users. Spinn3r makes use of the Firehose APIs that manage 95% of the indexing and web crawling work. Additionally, this program allows us to filter out the results using specific keywords, which weeds out the irrelevant content in no time.
4. Fminer: Fminer is one of the best, easiest and most user-friendly web scraping tools on the internet. It combines the best available features and is widely known for its visual dashboard, where you can view the extracted data before it gets stored on your hard drive. Whether you just want to scrape your data or have larger web crawling projects, Fminer will handle all types of jobs.
5. Dexi.io: Dexi.io is a well-known web-based scraper and data application. It doesn't require you to download any software, as you can perform your jobs in the browser. It is a browser-based program that allows us to save the scraped information directly to Google Drive and Box.net. Moreover, it can export your files to CSV and JSON formats and supports anonymous data scraping through its proxy servers.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509104.12/warc/CC-MAIN-20210117020341-20210117050341-00010.warc.gz
CC-MAIN-2021-04
2,721
8
https://forums.developer.nvidia.com/t/using-nvv4l2-for-decoding-in-custom-docker/155319
code
Please provide complete information as applicable to your setup. • Hardware Platform (Jetson / GPU) - Tesla T4 • DeepStream Version - 5.0 (docker) • JetPack Version (valid for Jetson only) • TensorRT Version - 7.0.0 (docker) • NVIDIA GPU Driver Version (valid for GPU only) - 450.51.06 • Issue Type (questions, new requirements, bugs) - question • How to reproduce the issue? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) • Requirement details (This is for new requirement. Including the module name-for which plugin or for which sample application, the function description) I would like to use TF with Accelerated GStreamer for RTSP/file decoding. I am currently using a docker image which has TF and TRT pre-installed. I have tried to install GStreamer using the official documentation, but when running gst-inspect-1.0 and gst-launch-1.0 the nvv4l2 plugins are missing. I have also tried using the DeepStream 5.0 docker, but I'm unable to install TF-GPU on it successfully, and I do not require the complete DeepStream functionality for this. I would finally prefer to have a light docker (without extra installations) to test now, as I would finally be deploying on a Jetson NX.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300616.11/warc/CC-MAIN-20220117182124-20220117212124-00259.warc.gz
CC-MAIN-2022-05
1,276
11
https://meta.discourse.org/t/could-not-create-ssl-tls-secure-channel-error-when-connecting-to-discourse-api-from-windows-server/128573
code
Turns out we had a very similar issue as described here: Our external legacy application that connects to our Discourse API is running on an old Windows 2008 R2 server. For whatever reason, the Windows server and the Discourse server were unable to agree on a cipher suite after the recent Discourse updates were installed earlier this week. Whether some ciphers were altered during the update, or whether this issue coincided with a LetsEncrypt cert renewal at the same time, I don't know. Anyway, rather than edit our Discourse, I was able to add a couple of cipher suites to the Windows server that they both agreed on, again with the help of the SSL Labs link that @Falco shared above. I guess this was caused by the change of cipher suites during the upgrade to Debian. I would have expected that my addition of the elliptic curve certificate would have made this work on all older Windows systems, not just IE11. If I'm not mistaken IE11 uses the Windows crypto library…
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00309.warc.gz
CC-MAIN-2022-33
975
5
https://weather.net.nz/latest-news-blog/members-blogs-menu/entry/site-updates
code
Welcome to the NZ Weather Enthusiasts blog area. This is a key area of the site, allowing members to share their thoughts and ideas, and to learn from others' experiences. I strongly encourage you to register on the site, come back to the blog area, and post your thoughts and ideas. There are a heap of new things being worked on on the site. As new things are added I'll update this post with links to the relevant areas. Areas currently on the site are: - Personal blog areas, where you can write about unique weather events in your area. This resource will be useful to yourself and others when learning about the weather - Site wiki - an ever growing list of meteorological terms. An excellent place for starting off - Communities - Regionally based weather communities where you can keep in touch with people in the areas closest to you. - Weather data and Weather site hosting Once you are registered on the site you have access to the communities area. For the other areas, if you would like to help build out the content, then please let me know. So what are you waiting for? Log in and start blogging!
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658901.41/warc/CC-MAIN-20190117082302-20190117104302-00272.warc.gz
CC-MAIN-2019-04
1,108
8
https://rdrr.io/github/timfolsom/hei/
code
Calculates Healthy Eating Index (HEI) scores for National Health and Nutrition Examination Survey (NHANES) data sets to facilitate analysis of demographic and dietary differences. For more information on the HEI metric, refer to Guenther et al. (2014) <doi:10.3945/jn.113.183079>.
Maintainer: Tim Folsom <[email protected]>
Package repository: View on GitHub
Install the latest version of this package by entering the following in R:
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668539.45/warc/CC-MAIN-20191114205415-20191114233415-00443.warc.gz
CC-MAIN-2019-47
554
6
https://davidthorpe.dev/work.html
code
Below is a curated selection of previous works. I'd love to show more, but unfortunately strict NDAs prevent me from divulging the details of some engagements. Nigella Lawson's Foodim App I acted as the early-stage technical partner for Nigella and her Pabulum team, helping define the product direction and execution of Foodim, the food-based social media app. The app is designed and built with performance as a number one priority. Instant scalability was key. This wasn't a small app to be used by a handful of users; the scale to which this would grow would be huge and the software had to handle that. Foodim runs on a lightweight Laravel application API and has a native Swift iOS application built in collaboration with Ben Dodson. Industry Leading Estimation Tool Hambro Roofing have been using technology to optimise as many parts of their business as possible. I have partnered with them extensively to help create a central platform in which all jobs, estimates, accounting, assets and more are managed. I spent a huge chunk of time working closely with them to help build out a next-generation estimation tool that we are now turning into a standalone digital product to sell within their industry. I collaborated closely with Loris Leiva, a fantastic frontend developer, on this project to build the new estimation tool. Product & UX Direction for Sterling Lexicon & Suddath I was hired for my software development experience in the global relocation industry to help Suddath and Sterling Lexicon map out and design their new suite of customer-facing tools and portals. I helped the companies understand their users' requirements and map out the feature sets required for a multi-stage, long term software development project. I was again able to lean on my network of designers and developers and work in collaboration with Ekrem Elmas to provide the internal software development teams with platform brand guidelines and user experience assets.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521152.22/warc/CC-MAIN-20220518052503-20220518082503-00176.warc.gz
CC-MAIN-2022-21
1,964
11
http://dancingwithfey.blogspot.com/2013/05/beltane.html
code
Beltane. The day when people dance the May Pole and have fun sexy times. Basically, a giant fertility festival. But I was also taught that it can be more than what immediately comes to mind when you hear the word "fertility." That it can also mean abundance. Or, that it's a good time to celebrate having the means to support yourself comfortably...or that it's a good time to do some work to get to where you have enough to live quite comfortably. Hmm, not sure if that makes sense...? Anyways, because I just have to, here is a song. I was trying to find a particular Omnia piece when I happened across this interesting band called Beltaine. I took an instant liking to them, and decided that they might be appropriate to share today given the day and their name. In other news, I got a call today about a possible job. Interview is tomorrow. Wish me luck. :)
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676588961.14/warc/CC-MAIN-20180715183800-20180715203800-00358.warc.gz
CC-MAIN-2018-30
859
5
https://tex.stackexchange.com/questions/346161/xindy-no-lettergroup-in-the-index
code
I am using a normal English index with xindy, but I would like to get rid of the letter-group headings at the beginning of each group. I was able to do this with .ist before, but it does not work with xindy, and searching online I have not found a way to sort it out. I understand that I have to write a simple .xdy file, but what sort of macro should I put in? I have not made any change to the index, and the index is simply in English. This is the reason for the absence of a MWE.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572198.93/warc/CC-MAIN-20220815175725-20220815205725-00549.warc.gz
CC-MAIN-2022-33
478
7
https://apple.stackexchange.com/questions/93186/iphone-doesnt-show-location-in-certain-apps
code
My iPhone 4 has several apps that I use location services on. But one set of 2 apps from the same developer will not show my location. They are enabled in Location Services, and show a grey arrow to show they have requested location, but will not provide a location on their maps. I have a GPS Status app which reports the correct location, as does Google Maps etc. The apps have been removed and re-installed. Reboots of the hardware have happened. I'm wondering if there is a hardware fault, as the developer of the app reports no similar problems with iPhone users. PS: iPhone 4 A1332 version 6.1.3 (10B329)
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506320.28/warc/CC-MAIN-20230922002008-20230922032008-00338.warc.gz
CC-MAIN-2023-40
613
3
http://liujunming.top/2021/04/17/What-is-the-fPIE-option-for-position-independent-executables-in-gcc-and-ld/
code
I read the Stack Overflow question "What is the -fPIE option for position-independent executables in gcc and ld?" and was very pleased with the answers, so I am reposting them on this blog.
PIE (position-independent executables) is to support address space layout randomization (ASLR) in executable files. Before the PIE mode was created, the program's executable could not be placed at a random address in memory; only position independent code (PIC) dynamic libraries could be relocated to a random offset. It works very much like what PIC does for dynamic libraries; the difference is that a Procedure Linkage Table (PLT) is not created, and PC-relative relocation is used instead. After enabling PIE support in gcc/linkers, the body of the program is compiled and linked as position-independent code. A dynamic linker does full relocation processing on the program module, just like dynamic libraries. Any usage of global data is converted to access via the Global Offsets Table (GOT) and GOT relocations are added.
Let's see ASLR work on the PIE executable and change addresses across runs. For the one with -no-pie, everything is boring:
Breakpoint 1 at 0x40052a: file main.c, line 4.
Before starting execution, break main sets a breakpoint at that fixed address. Then, during both executions, run stops at the same address.
The one with -pie however is much more interesting:
Breakpoint 1 at 0x754: file main.c, line 4.
Before starting execution, GDB just takes a "dummy" address that is present in the executable. After it starts however, GDB intelligently notices that the dynamic loader placed the program in a different location, and the first break stopped at a different address. Then, the second run also intelligently noticed that the executable moved again, and ended up breaking at yet another address.
echo 2 | sudo tee /proc/sys/kernel/randomize_va_space ensures that ASLR is on: How can I temporarily disable ASLR (Address space layout randomization)? | Ask Ubuntu. set disable-randomization off is needed otherwise GDB, as the name suggests, turns off ASLR for the process by default to give fixed addresses across runs to improve the debugging experience: Difference between gdb addresses and "real" addresses? | Stack Overflow.
Furthermore, we can also observe that readelf -s ./no-pie.out | grep main gives the actual runtime load address (the pc pointed to the following instruction, 4 bytes after):
68: 0000000000400526 21 FUNC GLOBAL DEFAULT 14 main
while readelf -s ./pie.out | grep main gives just an offset:
68: 0000000000000750 23 FUNC GLOBAL DEFAULT 14 main
By turning ASLR off (either in the kernel or with GDB's default set disable-randomization on), GDB always gives main the address 0x555555554754, so we deduce that the -pie address is composed from: 0x555555554000 + random offset + symbol offset (750)
Another cool thing we can do is to play around with some assembly code to understand more concretely what PIE means. We can do that with a Linux x86_64 freestanding assembly hello world, which assembles and runs fine with:
as -o main.o main.S
However, if we try to link it as PIE with (--no-dynamic-linker is required as explained at: How to create a statically linked position independent executable ELF in Linux?):
ld --no-dynamic-linker -pie -o main.out main.o
then the link will fail with:
ld: main.o: relocation R_X86_64_32S against `.text' can not be used when making a shared object; recompile with -fPIC
Because the line:
mov $message, %rsi # address of string to output
hardcodes the message address in the mov operand, and is therefore not position independent.
If we instead write it in a position independent way:
lea message(%rip), %rsi # address of string to output
then the PIE link works fine, and GDB shows us that the executable does get loaded at a different location in memory every time. The difference here is that lea encodes the address of message relative to the current PC address due to the RIP-relative syntax; see also: How to use x64 RIP-relative addressing.
Another fun thing that we can do is to put the message in the .data section instead of .text; main.o then assembles to:
e: 48 8d 35 00 00 00 00 lea 0x0(%rip),%rsi # 15 <_start+0x15>
so the RIP offset is now 0, and we guess that a relocation has been requested by the assembler. We confirm that with:
readelf -r main.o
Relocation section '.rela.text' at offset 0x118 contains 1 entries:
R_X86_64_PC32 is a PC relative relocation that ld can handle for PIE executables. This experiment taught us that the linker itself checks that the program can be PIE and marks it as such. Then, when compiling with GCC, -pie tells GCC to generate position independent assembly. But if we write assembly ourselves, we must manually ensure that we have achieved position independence.
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991514.63/warc/CC-MAIN-20210518191530-20210518221530-00426.warc.gz
CC-MAIN-2021-21
4,616
67
https://learn.marsdd.com/mars-library/general-telephone-screening-questions-sample-template/
code
MaRS Library General telephone screening questions: Sample template In the hiring process, once you have sourced candidates, the next step is to screen potential candidates. As part of this stage, it is advisable to conduct initial telephone interviews to vet candidates. This way you can eliminate unsuitable applicants and not waste either party’s time with a face-to-face interview. Having a brief list of general questions prepared in advance will help you qualify or disqualify candidates according to your hiring needs (for example, salary requirements, availability, communication skills). The following document is a sample list of general telephone screening questions. Sample of a general telephone screening template - Sample job description for an administrative assistant. - Removing barriers: Accessibility for Ontarians with Disabilities Act (AODA). - Terminating employees respectfully: When a startup faces firing an employee. - Convertible preferred stock: How investors maximize their return on investment (ROI). - Using video interviewing and phone screening in the recruitment process.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528481.47/warc/CC-MAIN-20190420020937-20190420042937-00342.warc.gz
CC-MAIN-2019-18
1,108
9
http://www.openfsg.com/index.php/Install_MYSQL
code
Please help improve this article by expanding it. Further information might be found on the talk page.
This article describes how to install MySQL from Optware. You should have read and understood:
- Use the Custom Ipkg Installer
- Disable the firmware MySQL in the web interface, or use a different port for the new MySQL in my.conf.
ipkg update
ipkg install mysql
Editing the config file
You can find the config file at /opt/etc/my.conf. The default user and password are the same as the root account.
Restart the MySQL server
Only if you changed the config file:
/opt/etc/init.d/S70mysql stop
/opt/etc/init.d/S70mysql start
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705957380/warc/CC-MAIN-20130516120557-00087-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
580
12
http://seperohacker.blogspot.com/2012_04_01_archive.html
code
This program will automate the entire process, and may be considered the semi-official installer for Django-nonrel with GAE. You will need to run it on the command line with the argument of your desired django-nonrel branch. The available arguments are: --master to install django-nonrel 1.3 master(This option will be removed in future releases.) - --dev13 to install django-nonrel 1.3 development - --dev14 to install django-nonrel 1.4 development - --dev15 to install django-nonrel 1.5 development - --clean to erase the current install The installer should be run in the directory where it exists. Here is an example of how to install 1.5 development in a terminal $ python dj_nonrel_install.py --dev15 Installing 1.3 development in a terminal $ python dj_nonrel_install.py --dev13 The folder "django-nonrel" will be created, and all files will be downloaded/installed within the folder for use with Google App Engine. After it's finished, just move/rename the directory to where ever you want the permanent location to be. Speed up creation of your development environment and get on to making awesome django apps! Any improvement suggestions or submissions are very welcome. This is tested on Linux, and will likely work on similar systems. Django Documentation - http://docs.djangoproject.com/en Django-Nonrel Documentation - http://docs.django-nonrel.org Django-Nonrel Mailing List - http://groups.google.com/group/django-non-relational Update 2012 Apr 28: Update 2012 May 01: Update 2012 May 15: The installer has now been modified to work with all operating systems. For systems that don't support symbolic links, the libraries will be moved directly inside of the Django-Testapp folder. Update 2012 May 28: Support has been added for "master" and "develop" branches. Master is installed by default. To install development branches of django-nonrel, use the command line argument --dev. Update 2012 Jun 4: Added Reference Links. Update 2012 Sep 20: Fixed problem with downloading development branches. Update 2012 Dec 25: Branch 1.4 is now available to install. Changed to install all contents into a folder. Update 2013 Mar 14: Fix for djangoappengine library. Switched source hosting to github. Update 2013 May 19: Django-nonrel 1.5 now available to install. Fixed issue with urls and unittests. Updated program description. Update 2013 May 20: I've just been informed that "master" branches will retired and all branches will be considered development. The default install branch will be moving to 1.5 development.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704645477/warc/CC-MAIN-20130516114405-00013-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
2,527
34
https://www.woolplatform.eu/docs/wool-platform/dev/web-service/index.html
code
WOOL Web Service Documentation
The WOOL Web Service is a Java Spring Boot application that can be deployed as a web service. It acts as a wrapper around the WOOL Core Library, offering an API that allows you to create client-server dialogue applications. A typical, simple architecture is shown in the Figure below. The components in the architecture are described as follows (from left to right):
WOOL Client - Your client application that connects to the WWS in order to render remotely executed WOOL dialogues.
WOOL Web Service - the Java Spring Boot application that can be deployed in a web server. It provides simple user management, and a REST API.
WWS REST API - a set of REST end-points for Authentication, Executing WOOL dialogues, and managing WOOL Variables.
WOOL Core - the "core" Java library that contains the software for parsing and executing .wool scripts. This is a collection of POJOs (Plain Old Java Objects) that can be embedded into any Java or Android application.
External Variable Service - Your (optional) web service that may be used to provide just-in-time updates to WOOL Variables.
Given the architecture above, a typical scenario for using WOOL in a client-server deployment is as follows. You deploy a WOOL Web Service that has a collection of .wool scripts embedded. You then write a client application that connects to the WOOL Web Server, allowing users to log in, start, and progress dialogues. If your .wool dialogues include WOOL Variables that need to be updated from an external source, implement and deploy your own External Variable Service and connect this to your WWS deployment. The build.gradle file may be used to build and deploy the WOOL Web Service to a running Tomcat 10 instance. A detailed installation tutorial is provided here: WOOL Web Service - Installation. After having successfully deployed a WOOL Web Service, you can start exploring its functionalities through the provided Swagger pages. TODO: Add screenshot of Swagger page for WWS.
A typical workflow for a client application interacting with the WOOL Web Service is as follows:
1. Call the /auth/login end-point, providing a username and password to authenticate a user and obtain a JSON Web Token (JWT).
2. Store the JWT, and include it in the header (<your-jwt>) for all subsequent calls.
3. Start the execution of a dialogue by calling the corresponding end-point.
4. Render the resulting JSON object as a dialogue user interface to the user, and store the loggedInteractionIndex.
5. When the user selects a reply, call the /dialogue/progress end-point, providing the previously memorized loggedInteractionIndex, as well as the selected reply.
6. The result is a JSON object with the same structure as received in step 4, so render, rinse, and repeat…
WOOL Variables are used in .wool scripts to create dynamic dialogue flow, and include flavourful personalisations. These WOOL Variables can be set and used inside the dialogue scripts themselves, as in the example below:
<<set $playerName = "Bob">>
Hello $playerName, how are you doing?
However, as in the example, it doesn't always make sense to set the values for WOOL Variables in the dialogue scripts themselves. Instead, these values might originate from another part of your client application. Imagine that your client application is a game that includes a user interface where players can insert their name. When a player does this, the value should be communicated to WOOL, so that the $playerName variable may be used in dialogues.
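To illustrate the workflow listed above, here is a minimal client-side sketch in Python using the requests library. It is only an illustration under stated assumptions: the base URL, the dialogue-start path, the header name, and the payload field names are invented for the example; only the /auth/login and /dialogue/progress end-points are named on this page.

import requests

BASE = "https://example.org/wool"          # hypothetical WWS deployment URL

# 1. Authenticate and obtain a JWT.
resp = requests.post(f"{BASE}/auth/login",
                     json={"user": "alice", "password": "secret"})
token = resp.json()["token"]               # response field name assumed
headers = {"X-Auth-Token": token}          # header name assumed

# 2-3. Start a dialogue (the exact start end-point is not given on this page; path assumed).
step = requests.post(f"{BASE}/dialogue/start",
                     params={"dialogueName": "intro"}, headers=headers).json()

# 4-5. Render `step` to the user, remember its loggedInteractionIndex,
#      then progress with the reply the user selected.
progress = requests.post(f"{BASE}/dialogue/progress",
                         params={"loggedInteractionIndex": step["loggedInteractionIndex"],
                                 "replyId": 1},            # reply field name assumed
                         headers=headers).json()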
The WOOL Web Service offers the following 2 end-points for sending WOOL Variable values to the service:
- /variables/set-variable - allowing you to set a single WOOL Variable by providing a 'name' and a 'value'.
- /variables/set-variables - allowing you to set a number of WOOL Variables simultaneously by including a JSON payload in the body.
Using these, you can inform WOOL about Variables whose values are generated through any part of your client application. The other way around, your client application can also ask the WOOL Web Service about WOOL Variable values, using the following end-point:
- /variables/get-variables - allows you to ask for all known WOOL Variables for a user, or a list of specific WOOL Variables (by providing a comma-separated list of variable names).
Another way of making sure that WOOL has up-to-date values for WOOL Variables is by using a WOOL External Variable Service, as explained below.
As explained in the Dialogue Execution step, the first thing you need to do before working with the WOOL Web Service is to authenticate. The WOOL Web Service supports two different "modes" of authenticating. Users that are defined in the users.xml configuration file can be given a role which can either be "user" or "admin" (if you don't specify, the role "user" will be assumed). When you authenticate with the WOOL Web Service (using the /auth/login end-point) as a regular "user", you can perform actions (start dialogues, set variables, etc.) on behalf of that authenticated user. However, when you authenticate as a user that has the "admin" role, you can control dialogues (start, progress, cancel, etc.) and data (set and retrieve variables) for any "wool user" you specify using the optional woolUserId parameters that are a part of all API end-points. This method of authentication may be used e.g. in a scenario where "clients" don't directly interact with the WOOL Web Service, but instead connect through a trusted web component that manages a single connection (see Figure below).
A WOOL External Variable Service is a web service that may be used by a WOOL Web Service deployment to act as an external source of information for WOOL Variable data. The WOOL Web Service itself keeps track of all WOOL Variables that are set for every individual user. For example, if a WOOL Variable is set in a dialogue using <<set $variableName = "value">>, that value is stored. If your WOOL scripts only use WOOL Variables that are set within the dialogue itself, the WOOL Web Service alone will handle everything. However, if your dialogue contains a statement such as "The temperature outside is $temperatureAtUserLocation degrees.", the value for $temperatureAtUserLocation is something that would likely need to be fetched from an external component - that is where the WOOL External Variable Service comes in. Every time the WOOL Web Service starts executing a dialogue script, it collects a list of all the WOOL Variables used within that dialogue. The WOOL Web Service may (or may not) already have known values for these variables, but in any case, it will send a request to the External Variable Service to check whether any of the variables require updating. Your specific implementation of the External Variable Service needs to take care of these variable updates. For example, your variable service could in turn call a 3rd party weather API to retrieve the temperature at the user's location, and return this value to the WOOL Web Service.
This flow is outlined in the sequence diagram below:
It is worthwhile to make sure that the External Variable Service answers the request for variable updates quickly, because any delay here will also delay the start of dialogue execution in the WOOL Web Service - which will negatively impact your end-user's experience. Apply caching, and make use of the provided updatedTime parameter that is passed along with each WOOL Variable, to make quick judgements about whether a variable needs to be updated at all.
If you're ready to start experimenting with your own WOOL Web Service, make sure to check out the relevant Tutorials.
If you found errors or have questions about this page, please consider reporting an issue at https://github.com/woolplatform/wool-documentation or sending an email to [email protected].
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474573.20/warc/CC-MAIN-20240225003942-20240225033942-00612.warc.gz
CC-MAIN-2024-10
7,748
48
https://pythonlobby.com/python-program-to-count-the-occurrence-of-word-is-the-in-text-file/
code
Python Program to Count the Occurrence of Word "is" & "the" in Text File
Q. Write a program to count the occurrences of the words "is" & "the" in a text file in Python.

# print occurrence of the words "is" & "the"
def read_data():
    f = open("text.txt", 'r')
    is_ = 0
    the_ = 0
    s = f.read()
    x = s.split()
    for i in x:
        if i == "is":
            is_ += 1
        elif i == "the":
            the_ += 1
        else:
            pass
    print("Occurence of Is is: " + str(is_) + " and The is: " + str(the_))

read_data()
# Occurence of Is is: 0 and The is: 2

Explanation: Here we have defined a function read_data(). Inside the read_data() function we have created a file object "f" and opened our text file in read mode, i.e. "r" mode. In the next step, we used the read() function to read the file and assigned the result to the variable "s"; read() returns the file contents as a single string. We then used the split() method, which returns the list of words available in our text file: it separates the words on whitespace and stores them in the list "x". After that we used a for loop to iterate over each word in the list "x" and checked for the words "is" & "the" using an if-elif-else statement. If the word "is" is found we increment the variable "is_" by 1, and if the word "the" is found we increment the variable "the_" by 1. At the end we print the final result. Note that the comparison is case-sensitive, so "Is" and "is" are counted as different words.
Programming questions on Text Files
- WAP to define a method to read text document line by line.
- WAP to define a method in python to read lines from a text file starting with an alphabet F.
- WAP to define a method to count number of lines starting with an alphabet F.
- WAP to define a method which display only those lines starting with an alphabet A or F.
- WAP to define a method to display only those lines which are bigger than 50 characters.
- WAP to define a method to count total number of characters in our text file.
- WAP to define a method to count total numbers of word available in our text file.
- WAP to define a method which counts the occurrence of particular word in a text file.
- WAP to define a method which only print the words having more than 5 characters.
Programming questions on Binary Files
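As a side note to the word-counting program above, the same count can be written more compactly with the standard library. This is a sketch rather than part of the original tutorial; it assumes the same text.txt file and the same case-sensitive matching.

from collections import Counter

def read_data_counter(path="text.txt"):
    # Read the whole file, split on whitespace, and count every word once.
    with open(path, "r") as f:
        counts = Counter(f.read().split())
    # Look up the two words we care about; words that never appear default to 0.
    print("Occurrence of is:", counts["is"], "and the:", counts["the"])

read_data_counter()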
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499700.67/warc/CC-MAIN-20230129044527-20230129074527-00561.warc.gz
CC-MAIN-2023-06
2,271
20
http://ww2.kqed.org/news/2013/11/25/visualizing-complexity-of-health-care-dot-gov/
code
Whether you’ve tried to enroll for medical insurance or not on the much-maligned Healthcare.gov website, you’re sure to have heard how bad it is. Those who have tried to use the federal site have found the sign-up process long and involved, if it works at all. And news reports have suggested that the code that supports the site runs to 500 million lines. I’m not a programmer, but I will go out on a limb and say that sounds like a lot of Red Bull, pizza, all-nighters and chair massages at Healthcare.gov code headquarters, wherever that is. The data visualization site Information Is Beautiful tried to put that mountain of software instructions into context with a graphic that compares that 500 million lines with software that runs other systems we’re familiar with — everything from a typical iPhone app (about 30,000 lines of code) to a cardiac pacemaker (100,000) to a military drone (about 3 million) to Facebook’s back-end code (about 60 million lines) to the genome of the mouse (about 120 million base pairs of DNA). So: 500 million lines? Yes — that’s a whole mess of code. But on further review, programmers quoted by other media sources, such as blogger Andrew Sullivan, say the idea that the software driving Healthcare.gov employs a half-billion lines of code is ludicrous. Here’s the reasoning behind one of the programmer critiques whom Sullivan cites: Over many years of research, programmer productivity in lines of code has been observed to range from 3,200 lines per year for small projects, down to just 1,600 lines per year for very large projects. Using the typical numbers for large projects, 500 million lines of code would require 312,500 man-years of programming effort. If true, that would involve the participation of just about all programmers in the US for a full year, and at an average $100K in salary and benefits, an investment of an amount approaching the entire defense budget! That programmer also notes that the real scandal behind Healthcare.gov is that “the Healthcare.gov web site is a project of moderate scale and complexity. If it really were such a monster, the failure could be excused – but given the modest scale of the actual project, screwing it up so badly is inexcusable!”
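The arithmetic behind that critique is easy to check. The sketch below simply reproduces the quoted assumptions (500 million lines, 1,600 lines per programmer-year for very large projects, $100K per person-year); these are the critic's figures, not independently verified ones.

lines_claimed = 500_000_000       # the widely reported figure for Healthcare.gov
lines_per_year = 1_600            # quoted productivity for very large projects
cost_per_person_year = 100_000    # quoted average salary and benefits in USD

person_years = lines_claimed / lines_per_year
total_cost = person_years * cost_per_person_year

print(f"{person_years:,.0f} person-years")   # 312,500 person-years
print(f"${total_cost:,.0f}")                 # $31,250,000,000 at the quoted rate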
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00098-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
2,254
7
https://xlprep.co/qvideo2/2-the-diagram-is-a-speed-time-graph-of-a-train%E2%80%99s-journey-between-two-1605705321028x443407187156664300
code
2 The diagram is a speed-time graph of a train’s journey between two stations. (a) What was the maximum speed of the train? (b) Circle the statement that describes the train’s motion 350 seconds after it left the first station. Accelerating Decelerating Constant speed Stopped at a station (c) Calculate the acceleration of the train during the first 150 seconds of its journey. (d) What was the speed of the train 20 seconds before it completed its journey? (e) How far did the train travel during the first 200 seconds? (f) Calculate the average speed of the train in kilometres per hour during the first 200 seconds.
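Since the speed-time graph itself is not reproduced here, no numerical answers can be read off, but the relations needed for parts (c), (e) and (f) are the standard ones. As a hedged sketch, assuming the train starts from rest and reaches a speed $v$ m/s after the first 150 s:

a = \frac{\Delta v}{\Delta t} = \frac{v - 0}{150\ \text{s}}, \qquad
\text{distance} = \text{area under the speed-time graph}, \qquad
\bar{v} = \frac{\text{total distance}}{\text{total time}}

To convert an average speed from m/s to km/h for part (f), multiply by 3.6.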
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00324.warc.gz
CC-MAIN-2021-43
623
1
https://tr.ifixit.com/Answers/View/279099/Suddenly+takes+forever+to+open+browser+&+Google+play+store
code
Suddenly takes forever to open browser & Google play store My Blue Studio HD 6.0 was working great then suddenly slowed to a crawl. It takes forever to open any page in browser. Google Play Store sometimes is so slow it won't open at all, takes literally hours to download & install app updates & some now won't update at all. I tried to install wireless update several times, after 30 minutes to download the update and trying to install i get message that update.zip is corrupted and install was aborted. PLEASE HELP! REALLY LOVED THIS PHONE & NOW ITS BARELY USABLE. Is this a good question?
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604495.84/warc/CC-MAIN-20220526065603-20220526095603-00405.warc.gz
CC-MAIN-2022-21
593
3
http://www.newsfactor.com/news/Windows-7-Ends-Mainstream-Support/story.xhtml?story_id=022000PV73JI
code
Microsoft announced Tuesday on its support page that it will be ending 'Mainstream Support' and moving to 'Extended Support' for some of its operating systems and servers. Most notably, Mainstream Support for the Windows 7 operating system, including the Enterprise and Enterprise N versions, will end January 13, 2015. At the same time, Microsoft will end Mainstream Support for Exchange Server 2010, Windows Server 2008, Windows Embedded Handheld 6.5, and Windows Storage Server 2008. Mainstream Support for the Windows Phone 7.8 ends even sooner, on September 7 this year. Although Mainstream Support will no longer be offered, Microsoft customers who continue to use those products will still have access to the company’s Extended Support service for another five years, including security updates at no cost and paid hotfix support. However, Microsoft notes that it will not be accepting requests for design changes or new features during the Extended phase. A Possible Reprieve Despite the announcement that it will be ending Mainstream Support for the Windows 7 platform in the next six months, Microsoft has pushed back on announced support deadlines before. Its popular Windows XP platform, for example, had its deadline for the end of Mainstream Support delayed following poor adoption of its successor, Windows Vista. Like Windows XP, Windows 7 has proven to be a popular operating system which many customers have continued to cling to, following the disappointing reception of the Windows 8 platform. According to the International Business Times, half the world's laptops and desktops still run on Windows 7. Since Microsoft will no longer be rolling out new features for the OS, corporate and enterprise customers will have to resort to custom solutions if they want new features for the system over the next five years. Even in the event Microsoft doesn’t postpone the end of Mainstream Support for Windows 7, Extended Support for the widely used OS is due to continue until 2020, and by then, Microsoft will likely have released new versions. Out to Pasture Microsoft has also announced service changes for several other products. Support for its Internet Security and Acceleration Server 2004 Standard Edition will end on October 14, 2014, alongside Windows CE 5.0, the embedded device operating system first released in 2004. The company also announced plans to end Extended Support on January 13, 2015 for all versions of its Host Integration Server 2004 (including its Enterprise version), Systems Management Server 2003, all versions of its Virtual Server 2005 platform, and the Visual FoxPro 9.0 Professional Edition. Microsoft indicates there will be no new security updates, non-security hotfixes, free or paid assisted support options, or online technical content updates for those products after that date. Support for several service packs are also being retired. The Office 2010 Service Pack 1 and SharePoint Server 2010 Service Pack 1 will see their support end on October 14, 2014, while Forefront Unified Access Gateway 2010 Service Pack 3, Visual Studio 2012 Remote Tools, Visual Studio 2012 Test Professional, Visual Studio Express 2012 for Web, Visual Studio Express 2012 for Windows 8, and Visual Studio Express 2012 for Windows Desktop will see their support end on January 13, 2015. Posted: 2015-01-05 @ 11:55am PT How can I find extended support pricing for Sharepoint Server 2010? 
Posted: 2014-07-11 @ 5:16pm PT @Walt: The first sentence of the story includes a link to Microsoft's support page where you can find more details, and here's another direct link to it: Posted: 2014-07-11 @ 5:14pm PT How to obtain "extended support" is NOT explained. Where can such information be obtained?
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118851.8/warc/CC-MAIN-20170423031158-00461-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
3,731
19
https://bumblingalong.wordpress.com/2010/04/01/nothing-to-see-here/
code
This blog is written by me, and occasionally guest posters. My opinions expressed here are all my own, and are just that - opinions! Please feel free to link to BumblingAlong - in fact I encourage it - but please make sure it's from content that's appropriate. Please leave a comment if you like, or don't like, what I've written - I'd love to hear from you. But please be polite, and be aware that I reserve the right to remove comments which aren't.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890566.2/warc/CC-MAIN-20200706222442-20200707012442-00518.warc.gz
CC-MAIN-2020-29
452
3
https://gpumd.org/dev/gpumd/input_parameters/compute_gkma.html
code
compute_gkma <sample_interval> <first_mode> <last_mode> <bin_option> <size>
- sample_interval is the sampling interval (in number of steps) used to compute the modal heat current.
- first_mode and last_mode are the first and last mode, respectively, in the eigenvector.in input file to include in the calculation.
- bin_option determines which binning technique to use. The options are bin_size and f_bin_size.
- size defines how the modes are added to each bin. If bin_option is bin_size, then this is an integer describing how many modes are included per bin. If bin_option is f_bin_size, then binning is by frequency and this is a float describing the bin size in THz.
compute_gkma 10 1 27216 f_bin_size 1.0
This means that you want to calculate the modal heat current with the GKMA method; the modal heat flux will be sampled every 10 steps; the range of modes you want to include in the calculations is from 1 to 27216; and you want to bin the modes by frequency with a bin size of 1 THz.
compute_gkma 10 1 27216 bin_size 1
This example is identical to Example 1, except the modes are binned by count. Here, each bin only has one mode (i.e., all modes are included in the output).
compute_gkma 10 1 27216 bin_size 10
This example is identical to Example 2, except each bin has 10 modes.
This computation can be very memory intensive. The memory requirements are comparable to the size of the eigenvector.in input file. Depending on the number of steps to run, sampling interval, and number of bins, the heatmode.out output file can become very large as well (i.e., many GBs). This keyword cannot be used in the same run as the compute_hnema keyword. The keyword that appears last will be used in the run.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816977.38/warc/CC-MAIN-20240415111434-20240415141434-00617.warc.gz
CC-MAIN-2024-18
1,621
21
https://github.com/danielcrenna
code
- tweetsharp 395 TweetSharp is a fast, clean wrapper around the Twitter API.
- metrics-net 385 Capturing CLR and application-level metrics. So you know what's going on.
- hammock 236 REST, easy. A C# HTTP API client for consuming web services.
- hammock2 77 A single .cs file for making munchy munchy API.
- oauth 73 A public domain OAuth client library written in C#
Repositories contributed to
- azzlack/Microsoft.AspNet.WebApi.MessageHandlers.Compression 27 Drop-in module for ASP.Net WebAPI that enables GZip and Deflate support
- conatuscreative/boxer 4 Polygon pipeline tool for 2D games
- conatuscreative/bell 1 Audio pipeline tool for games
- stefanprodan/WebApiThrottle 182 ASP.NET Web API rate limiter for IIS and Owin hosting
- readium/readium-js-viewer 150 ReadiumJS viewer: default web app for Readium.js library
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207925696.30/warc/CC-MAIN-20150521113205-00114-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
973
14
http://www.araneo.se/blog/category/all/2
code
What you need to get started - Revision Control System - somewhere to keep your code (we will be using git) - CI System - somewhere to define your build process (we will be using Atlassian Bamboo) - Build Server - a place to build and test your code (we will be using Amazon EC2) - Binary Repository Manager - for your Artifacts (we will be using SonaType Nexus) We will also use specific components for automated testing, code quality analysis, documentation and visualization. However, the installation and usage of these components will be explained once they are introduced in the articles to come. About the Demo Project The project is simulating a DID Provider, that is, a supplier of common phone numbers. In this model, the DID Provider holds large pools of phone numbers in multiple countries, which clients can purchase for a nominal fee. To purchase a DID, you must first lock a range of numbers. This lock exists for 15 minutes (not written in stone). Each lock has a unique lock code. From the range of locked numbers, you are allowed to pick one number, which you allocate with an API-call together with the lock code. The locked but non-chosen numbers are returned to the "number pool" of available numbers as soon as you pick a number. The setup is based on a LAMP-stack using CentOS 6.6, PHP 5.3, MySQL 5.6, Apache 2.2 and SOAP APIs using WSDL.. For more information, you can check out my article on how to setup a virtual LAMP-stack on VirtualBox!
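To make the lock-then-pick flow described above concrete, here is a rough client-side sketch using the Python zeep SOAP library. The WSDL URL and the operation and field names (LockRange, AllocateNumber, lockCode, and so on) are invented for illustration; the article does not define the actual API contract, so treat this only as a sketch of the intended interaction.

from zeep import Client

# Hypothetical WSDL location for the DID Provider's SOAP API.
client = Client("https://didprovider.example.com/api?wsdl")

# 1. Lock a range of numbers in a country pool; the lock is valid for roughly 15 minutes.
lock = client.service.LockRange(country="SE", quantity=10)

# 2. Pick exactly one number from the locked range and allocate it,
#    passing back the unique lock code returned in step 1.
allocation = client.service.AllocateNumber(lockCode=lock.lockCode,
                                           number=lock.numbers[0])

# The remaining locked-but-unchosen numbers return to the available pool.
print(allocation)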
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667945.28/warc/CC-MAIN-20191114030315-20191114054315-00033.warc.gz
CC-MAIN-2019-47
1,465
9
https://data.eol.ucar.edu/dataset/2.146
code
TOGA COARE soundings derived from NCAR Electra flight level data
This dataset consists of soundings derived from National Center for Atmospheric Research Electra flight level data. Data were taken during events of opportunity. The data were processed through the visual quality control process. The automated and spatial quality control processes were not conducted on this dataset. This dataset includes pressure, temperature, dew point, relative humidity, wind speed, wind direction, and altitude taken at five second intervals along the flight path. Refer to the station README file for details.
Frequency: no set schedule
Begin datetime: 1992-11-15 00:00:00
End datetime: 1993-02-19 23:59:59
Minimum (West) Longitude: 154.00, Maximum (East) Longitude: 168.00
Citation (example citation following ESIP guidelines): UCAR/NCAR - Earth Observing Laboratory. 2011. TOGA COARE soundings derived from NCAR Electra flight level data. Version 1.0. UCAR/NCAR - Earth Observing Laboratory. https://doi.org/10.5065/D6BK19NH. Accessed 09 Dec 2023. Today's date is shown: please replace with the date of your most recent access.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100873.6/warc/CC-MAIN-20231209071722-20231209101722-00076.warc.gz
CC-MAIN-2023-50
1,765
17
https://sharepoint.stackexchange.com/questions/10441/ip-tracking-on-user-account/10443
code
Is there any way to track/trace/find an IP that a user account has logged in as? I feel like I have a user logging into the wrong account and is doing some damage... This will hardly be a SharePoint feature. I am no expert on Windows Server but it could be done by using the audit logs from your domain controller. Configured correctly it should log all logins and MAYBE the IP that belongs to it. You should be able to find better help on an Windows Server forum. What you could do additionally is turn on audit logging for SharePoint to check what that user does.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703533863.67/warc/CC-MAIN-20210123032629-20210123062629-00017.warc.gz
CC-MAIN-2021-04
565
4
https://mobecls.com/magento-extensions/
code
How to find the best Magento | Adobe Commerce extension for an online store? Here’s a list of tips to save your precious time. We’ve created this list using our decade-long eCommerce experience and our clients’ frequently asked questions. Before Searching Extensions for Magento | Adobe Commerce: - Specify the problem. If the problem is diversified, try to divide it into small issues that you need to solve. - Inspect your current customizations. Check if there were changes in templates, checkout, products, and other sections. - Use a support team to find the most suitable solution. Visit an extension provider website and contact the support team. Tell them your business issues and ask for advice. Before Buying Extensions for Magento | Adobe Commerce: - Be sure that the chosen extension solves your issues. If you consider buying a complex extension to resolve a little problem, you may face additional code and incompatibility issues. - Magento | Adobe Commerce extensions can conflict with current customizations at your store. Get the information about the extension’s specification to avoid extra time and money expenses on conflict resolving. - Be ready to test the extension the day after the purchase. Before the Installation: - Prepare development environment to test the extension’s functionality. It’ll help to avoid possible delays and troubles. After the Installation: - If some features of the installed extension don’t work, contact the extension provider. A support team will fix the errors within an hour and for free. - You can use the money-back period if the extension doesn’t fulfill your business needs. - If you need additional features, consider Magento customization services. There’s no ultimate extension that solves all the problems.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510481.79/warc/CC-MAIN-20230929022639-20230929052639-00826.warc.gz
CC-MAIN-2023-40
1,787
15
https://www.pluralsight.com/newsroom/press-releases/pluralsight-skills-expands-hands-on-learning-capabilities
code
Pluralsight Skills Expands Hands-On Learning Capabilities for Cybersecurity, IT Ops, and Software Development; Launches New Tech Certification Experience Oct 12, 2021 SILICON SLOPES, Utah — Pluralsight, Inc., the technology workforce development company, today announced an expansion to Pluralsight Skills, with the addition of new lab-based hands-on learning experiences for cybersecurity, IT ops, and software development professionals as well as a new experience designed to help technology learners prepare for the tech industry’s top certifications. Pluralsight Skills’ addition of hands-on learning capabilities in high-demand technology skills such as cybersecurity, IT ops, and software development complements the platform’s extensive library of on-demand video content from the world’s top tech instructors. Hands-on learning is a critical component to any effective skills development platform and strategy. Knowledge retention and application from hands-on learning experiences is dramatically improved over alternative learning modalities, and the combination of both hands-on and on-demand skill development techniques gives enterprises the tools they need to effectively develop technology skills at scale. “The most successful enterprise organizations have a structured approach to skill development that includes a combination of hands-on, on-demand, and instructor-led training, live or virtual. This expansion of hands-on learning capabilities enables our customers to more effectively develop their tech talent at scale and ensure that they have the skills inventory to complete their most pressing technology projects in an effective and cost efficient manner,” said Gary Eimerman, GM of Skills, Pluralsight. As enterprises work to tackle new technology advancements and ways of working following the onset of the 2020 pandemic, skills gaps have widened in critical technology areas such as cloud computing, data, cybersecurity, AI, and machine learning. In Pluralsight’s recent State of Upskilling report, technology professionals stated that their confidence to do their current jobs (down 13%) as well as their capacity to do their jobs in the next three years (down 8%) both decreased from a year ago. With the introduction of these new labs, Pluralsight Skills now offers more than 900 lab-based hands-on learning experiences for technologists looking to close skills gaps and stay ahead of the rapid pace of change for today’s technologies. These hands-on learning opportunities include: 772 labs for cloud computing 21 labs for software development 63 labs for cybersecurity 44 labs for IT ops A More Clear Path For Certifications In addition to its expansion of hands-on learning experiences, Pluralsight is introducing a new tech certification landscape within Pluralsight Skills called the Certification Prep Center. This new experience enables learners to survey more than 130 certification prep paths offered within Pluralsight Skills and get clear step-by-step guidance on how to develop the necessary skills to pass the certification exams. For more information about how Pluralsight Skills partners with enterprises to eliminate skills gaps, please visit www.pluralsight.com. Pluralsight is the leading technology workforce development company that helps companies and teams build better products by developing critical skills, improving processes and gaining insights through data, and providing strategic skills consulting. 
Trusted by forward-thinking companies of every size in every industry, Pluralsight helps individuals and businesses transform with technology. Pluralsight Skills helps enterprises build technology skills at scale with expert-authored courses on today’s most important technologies, including cloud, artificial intelligence and machine learning, data science, and security, among others. Skills also includes tools to align skill development with business objectives, virtual instructor-led training, hands-on labs, skill assessments and one-of-a-kind analytics. Flow complements Skills by providing engineering teams with actionable data and visibility into workflow patterns to accelerate the delivery of products and services. For more information about Pluralsight, visit pluralsight.com.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819847.83/warc/CC-MAIN-20240424174709-20240424204709-00061.warc.gz
CC-MAIN-2024-18
4,273
14
https://docs.cpanel.net/knowledge-base/third-party/select-a-php-binary/
code
Select a PHP Binary Last modified: May 13, 2020
The cPanel Server Daemon (cpsrvd) must know the path to the specific PHP binary that you wish to use to process PHP scripts (for example, phpMyAdmin). There are several PHP binaries from which you may choose.
The cPanel-provided PHP binary: The /var/cpanel/usecpphp file, when it exists, causes the cpsrvd daemon to use a non-system PHP binary that cPanel, L.L.C. provides. You may wish to use this functionality if Apache’s version of PHP does not include all of the features required to run inside of the cpsrvd daemon; this binary contains all of the necessary options to run inside of it. The cPanel-provided PHP binary exists on the system as either a php-cgi file or a php file, and the system prefers the php-cgi file over the php file if both are available and executable. If the binary exists, is executable, and the /var/cpanel/usecpphp file exists, then the cpsrvd daemon will always use this binary.
PHP binary use: If /var/cpanel/usecpphp does not exist, WHM uses the PHP binary (/usr/local/cpanel/3rdparty/bin/php-cgi). You can modify the behavior in the /var/cpanel/3rdparty/bin/php file.
Use the following flowchart to determine which PHP binary the cpsrvd daemon uses: cPanel & WHM PHP binary flowchart
- The system prefers the php-cgi file over the php file if both are available and executable.
- The system only uses the /usr/bin/php file if the usecpphp file does not exist and the request is for a third-party product.
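The flowchart image is not reproduced here, so the following is a minimal, hypothetical Java sketch of the selection rules described above. It is not cPanel code, and the directory assumed for the cPanel-provided php-cgi and php binaries is a placeholder, because the page does not spell out their full paths.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch only; not cPanel source code.
public class CpsrvdPhpSelection {

    private static final Path USECPPHP   = Path.of("/var/cpanel/usecpphp");
    private static final Path CP_PHP_CGI = Path.of("/path/to/cpanel-provided/php-cgi"); // placeholder path
    private static final Path CP_PHP     = Path.of("/path/to/cpanel-provided/php");     // placeholder path
    private static final Path WHM_PHP    = Path.of("/usr/local/cpanel/3rdparty/bin/php-cgi");
    private static final Path SYSTEM_PHP = Path.of("/usr/bin/php");

    private static boolean usable(Path binary) {
        return Files.isRegularFile(binary) && Files.isExecutable(binary);
    }

    /** Which binary cpsrvd would pick, following the rules stated above. */
    static Path selectPhpBinary(boolean requestIsForThirdPartyProduct) {
        if (Files.exists(USECPPHP)) {
            // usecpphp forces the cPanel-provided PHP; php-cgi is preferred over php.
            if (usable(CP_PHP_CGI)) return CP_PHP_CGI;
            if (usable(CP_PHP))     return CP_PHP;
        } else if (requestIsForThirdPartyProduct && usable(SYSTEM_PHP)) {
            // The system PHP is only used when usecpphp is absent
            // and the request is for a third-party product.
            return SYSTEM_PHP;
        }
        // Otherwise WHM falls back to its bundled PHP binary.
        return WHM_PHP;
    }

    public static void main(String[] args) {
        System.out.println("A third-party request would use: " + selectPhpBinary(true));
    }
}
```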
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652569.73/warc/CC-MAIN-20230606114156-20230606144156-00222.warc.gz
CC-MAIN-2023-23
1,430
26
https://sourceforge.net/directory/language:tcl/language:plsql/language:java/
code
Benedetto provides a catalog system for libraries. It has a GUI for registration of media, searching and printing of data.
Project KVoIP is intended for the accounting of IP telephony calls. The project is written in Java and will consist of three parts: an application server, a radius-server and a client part.
ReadyESB is an Enterprise Service Bus which is real and available. It depends on Axis2 (Java/C), the WebLogic platform and Oracle Database.
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607647.16/warc/CC-MAIN-20170523143045-20170523163045-00470.warc.gz
CC-MAIN-2017-22
1,302
11
https://www.graduate.technion.ac.il/Theses/Abstracts.asp?Id=30446
code
Ph.D Student: Leviant Ira
Subject: Multilingual Word Embeddings: Evaluation and Template-Based
Department: Department of Industrial Engineering and Management
Supervisor: ASSOCIATE PROF. Roi Reichart
Full Thesis text
In recent years, the interest of the Natural Language Processing (NLP) community has been drawn to the development of Vector Space Models (VSMs) of semantics. These models map lexical units such as words, phrases or sentences into vectors, allowing NLP algorithms to compute semantic distances between these units. Most VSMs are based on the distributional hypothesis, stating that words that occur in similar contexts tend to have similar meanings. Our focus in this thesis is on multilingual word meaning representations. A word meaning representation (generally called a word embedding) is a mathematical object associated with each word, often a vector. Multilingual word representations map between the word embedding spaces for different languages, or a common word embedding space for all languages enables a shared semantic space that reveals word correspondences across languages. Humans as well as VSMs may consider various languages when making their judgments and predictions. The resulting models are evaluated either in an intrinsic human based evaluation, where human scores are most often produced for word pairs presented to the human evaluators in English, or in application based evaluation with tasks such as cross-lingual text mining, document classification and sentiment analysis. In this thesis we focus on human based evaluation, where a correlation between the model scores and the human scores is computed. We show significant differences in human based evaluation across languages and establish the importance of the judgment language (JL), the language in which word pairs are presented to human evaluators, on human semantic judgments and on their correlation with VSM predictions. Next, we introduce Multi-SimLex, a large-scale lexical resource and evaluation benchmark covering datasets for 12 typologically diverse languages. Finally, we present a fully unsupervised algorithm, SG-IWE, for the extraction of patterns which is suitable for capturing word similarity and is easily adjustable to a multilingual setup.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00130.warc.gz
CC-MAIN-2021-39
2,262
10
https://sposointe.eu/content_m1/138.php
code
Example of real estate contract form. Which documents are or are not part of the contract documents. Where in the set of documents certain topics are covered. All items including pricing, terms and modes of payment, and amounts to be paid must be written in a precise and easily understandable manner. Properly identify the parties involved in the contract. People who may not be well aware of the right way to draw a contract can make use of contract examples for the purpose. This catering contract example can come in handy in such cases as this gives the user a clear idea of the segments to be included in the contract. These contract forms give them an idea of the format and the content to be added to the document. Browse through them so you can have the reference that you need within the development of your own contract document. As an example, the content of an agent contract is very different from that of a behavior contract. If everything falls into place, both of them should sign a contract to avoid any confusion later. You may also see videography contract examples in pdf. A party organizing an event may approach a catering agency.
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648343.8/warc/CC-MAIN-20191013221144-20191014004144-00400.warc.gz
CC-MAIN-2019-43
1,153
3
http://digitalwisdom.net/dwi_thcm.htm
code
Frequently Asked Questions - CoolMaps For additional information, please see the Mountain High Maps Online User Guide. Q:I cannot drag and drop between the browser and my application. A:This is generally a question of memory available to move the CoolMaps images between browser catalog and the application - these images can be very large - as much as 14 Mbytes for the JPEG images. A way to test this is to drag and drop a smaller image. If the drag and drop works for smaller file size images then it is a memory problem; if the drag and drop does not work, the application is probably not supported for drag and drop. Place modules are included for Photoshop, Illustrator - others may have been added as they became available. A further word of caution - the drag and drop operation with large files may take a lengthy period of time. Q:How can I get Adobe PageMaker to display an image instead of a grey box? Q:In QuarkXpress, there is difficulty printing from the JPEG CoolMaps. Basically you cannot print unless you converts to TIFF. Is this correct? A:The problem is not that it's a JPEG - it's that it is in RGB mode (we are assuming that separations are required because RGB JPEGs will print as colour composites straight from QuarkXPress - on an ink jet printer for example). To print as separations the map must first be changed to CMYK mode (in Photoshop or some other image-editing application). It can then be saved as a TIFF / JPEG / EPS or whatever else QuarkXPress can import. The preferred format for separation output is EPS DCS (five files for each image), which can be output on an imagesetter much faster than any other format. The reason that it is in RGB, is a) that not everyone wants CMYK files (multimedia and web users for example), and b) those who do want CMYK format usually prefer to convert to CMYK after they have done the manipulation work (smaller files thus faster), and also use their own separation tables (there are many) depending on what the print method is to be. Any comments or problems with this let us know - thank you! © Digital Wisdom, Inc.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817688.24/warc/CC-MAIN-20240420214757-20240421004757-00358.warc.gz
CC-MAIN-2024-18
2,091
13
http://www.jaguarboard.org/index.php/cn/forum-cn/troubleshooting/607-debian-crashing.html
code
We have around 20 JaguarBoards running the Debian install preloaded along with Gnome, and on August 1st every one of them crashed. Whenever we plug in a wifi card it goes down. I've narrowed it down to a display issue. If I adjust the resolution on the monitor I can get it to at least accept the wifi card, but as soon as a new window tries to open or I expand a window / click settings / try to connect to a wifi network / etc., the screen goes black and the board restarts. Any suggestions for fixing a resolution / video card issue or whatever it may be on these boards? I've done an apt-get update && apt-get upgrade, and I have also tried different wifi card chipsets such as Realtek and Atheros with no luck. After much more diagnosing it looks like every time a window is maximized or any window tries to take the real estate of the screen it locks up and crashes. I've tried different monitors and different resolutions with mixed results. Maximizing ALWAYS crashes it, but some windows and programs are able to open by using monitors with different aspect ratios. So this seems like it has to be a Display Driver or X Windows issue. Still trying to come up with a fix. I am not sure what is causing the issues you faced. I have tested debian 8.1 with wifi dongle EDUP EP-N8508GS (chip RTL8188ucs), everything is fine, no crash at all. To connect a monitor, I use a HDMI to VGA adapter, because there is no HDMI on my monitor. May I know more details about those boards? When did you get it? After you got it, did you install Debian instead of the preloaded Fedora 22? After you installed debian, did it crash soon? Before it crashed, what did you do? If possible, add my skype id: tim.jjune, we can talk in detail. I've requested you on Skype. What we need is a stable Linux GUI with Java support. The only thing we use these boards for is to run a .jar file. I was not able to get the pre-loaded revision of Fedora to run a .jar file. If you have an image of Fedora with Gnome or KDE that has Java installed this is really all we need. Do you have any images you could provide of Fedora that has a GUI? It seems that just about every time I go through the steps of installing a GUI of any flavor of Linux on the JaguarBoard it becomes unstable; Debian 8 is so far the only one we had working and now it has failed as well. I have even tried a Live boot version of Fedora 23 and when it gets to the desktop it crashes (note this is from a freshly loaded image on a flash drive with no changes; I briefly see the desktop and then I get No Signal on my monitor). I have also tried a Live boot of Fedora 22 with KDE; it will boot to the desktop but when I click on Settings it crashes. Hopefully we can discuss more on Skype. I am Eastern Standard Time. I got your invitation, but we have a 12 hour difference. Did you monitor the RAM status when you run the GUI with the Java environment? I am afraid 1GB RAM is not enough to handle that. If no java is running, only Debian with desktop, will it work stably? I can't even get the GUI to run without Java. I will monitor the RAM on Debian 8 to see if that is an issue, but it seems to crash consistently when opening windows on a fresh install. LCD monitors seem to be an issue as well, as I've seen other users have issues with certain HDMI monitors. We've tried 5 different monitors and various cables, HDMI to HDMI, HDMI to DVI and HDMI to VGA, and each combination seems to produce different crashes. Realtek chipset wifi adapters seem to crash the board as soon as I plug them in. 
Atheros chipset adapters are working. I've attached the syslog. Please note that even removing our scripts... /home/jag/KERNEL && bash launch_kiosk_run_loop.sh , still doesn't fix the other crashes. I do not understand. Do you mean you can not run desktop on debian 8, no matter you install java or not, it will crash? I have tested debian 8.1 with desktop for over 48 hours before, no crash at all. To verify it is because of hardware issue or software(OS/applications configuration) issue, can you try to run Fedora server 22 for long time (48 hours)? Check if it will fail or crash, if everything is fine with fedora, that means it is software issue, you need to figure it out.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337428.0/warc/CC-MAIN-20221003164901-20221003194901-00483.warc.gz
CC-MAIN-2022-40
4,201
16
http://www.glocktalk.com/forums/showthread.php?p=19787586&mode=threaded
code
new show "Amish Mafia" on Discovery Channel? Anyone seen this? There is no freakin way this is real. It has all the hallmarks of a well written, yet totally unbelievable "Reality" show. I'm suprised that a channel like the Discovery Channel would do something this terrible....
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00638-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
277
3
https://ec2-52-90-15-253.compute-1.amazonaws.com/best-how-do-i-find-vessel-information-api-in-json/
code
Where Is It Located?
You can access Vessel Information API from anywhere in the world. The API is available in JSON format and is simple to use. Simply register with the API provider, get an API access key, and begin making API calls. With this API, you will get details about:
- Name of ship
- Stern Id
- Ship Flag
- IMO number
- Call Sign
- MMSI
- Vessel details
- Images and much more
Why Is The Location Important?
The location is important because it enables the system to find and track ships across the globe. In addition, it provides a helpful foundation for other features, such as tracking vessel movement. It also signifies the location of the ship’s home port, which is where it originates from. And finally, it identifies the port where the ship is currently located.
What Types Of Information Can I Get From This?
For example, you can get all information about vessels: name, type of ship, flag state; country of registration; country of owner; draft; GT; NT; length; beam; engine power; and more. You can also get information about vessels by a specific id number and other information. All that information is provided in JSON format that is very easy to read and understand. Vessel Information API works with all types of vessels: bulk carrier, container ship, passenger ship, tanker ships, offshore support vessel, warships, and more. Finally, you can get shipping data from any point on the globe. This data can be obtained from any computer with an internet connection running a Linux or Mac operating system. It can also be received via email or any other messaging service, including SMS.
And Other Information About Vessels
In addition to providing access to live information on vessels around the globe, Vessel Information API also supports all major programming languages. This means that developers have the freedom to choose the language that best suits their needs or project requirements. You can also use this API for many purposes, such as maritime security, tracking vessels for a global shipping company, keeping tabs on cargo vessels for a marine insurance company, and more! This global maritime intelligence service covers everything from commercial rigs to warships, and from environmental protection projects to offshore installations and everything in between! By partnering with organizations around the globe and receiving live feeds from their cameras, Vessel Traffic Information API provides all-in-one maritime security services by tracking vessels across international boundaries. Vessel Information API gives you information about all globally live on-board vessels or by a range of area. You can see the list of all globally live on-board vessels or filter by range of area. You can get individual ship detail info with ship photos. To make use of it, you must first:
1- Go to Vessel Traffic Information API and simply click on the button “Subscribe for free” to start using the API.
2- After signing up in Zyla API Hub, you’ll be given your personal API key. Using this one-of-a-kind combination of numbers and letters, you’ll be able to use, connect, and manage APIs!
3- Employ the different API endpoints depending on what you are looking for.
4- Once you find the endpoint you need, make the API call by pressing the button “run” and see the results on your screen.
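The page describes the call flow only in prose, so here is a minimal, hypothetical Java sketch of fetching vessel details as JSON. The host name, path, query parameter, and authorization header below are placeholders rather than documented endpoint details; the real endpoint and key come from the Zyla API Hub account mentioned above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative sketch; the URL and header are placeholders, not the real API surface.
public class VesselLookup {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("VESSEL_API_KEY"); // personal key from the API hub
        String imo = "9395044";                          // example IMO number

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example-vessel-api.test/vessels?imo=" + imo)) // placeholder endpoint
                .header("Authorization", "Bearer " + apiKey)                           // assumed auth scheme
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The body would be JSON: name, flag, IMO, call sign, MMSI, dimensions, photos, ...
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```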
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817146.37/warc/CC-MAIN-20240417075330-20240417105330-00208.warc.gz
CC-MAIN-2024-18
3,332
13
http://dev.cityofchicago.org/blog/page2/
code
With the Smart Green Infrastructure Monitoring pilot project coming to an end, we have turned off updates for the dataset that published the sensor observations on a near-real-time basis. This dataset contains almost 14 months of data and will remain on the Chicago Data Portal for historical reference. We have updated the Chicago Public Schools - Safe Passage Routes SY1718 dataset. The Chicago Public Schools recently added 14 routes and removed one. Please use the Contact Dataset Owner link in the dataset for any questions. The 311 Service Requests - Alley Lights Out dataset has not updated since 12/23/2017 due to a technical issue. We are investigating and attempting to fix the problem. The other 11 datasets in the 311 Service Requests series are updating properly. As some users have noted, about five percent of records in the Building Permits dataset showed no ISSUE_DATE. This is due to some complexity in the workflow for issuing building permits, leading to some situations where the database field we used for ISSUE_DATE does not get populated in the source system. We have updated the Boundaries - City geographic dataset with a slightly updated version of the City of Chicago boundary. The changes primarily relate to land acquired for O’Hare airport modernization and have been in effect for quite some time but we only recently realized they were not reflected in this dataset.
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741324.15/warc/CC-MAIN-20181113153141-20181113174535-00046.warc.gz
CC-MAIN-2018-47
1,437
7
http://mmb2018.org/authors.html
code
Authors / Abstracts Submit an Abstract Since the beginning, MMB conferences are highly focused and interactive meetings that gather people that want to do biology using microtechnologies. Therefore, the conference format was chosen to foster these interactions. The MMB 2018 Conference is comprised of 6 Keynote Presentations. We are currently accepting abstracts for the contributed portion of this conference. Please view the Abstract Classification List for the fields in which we are seeking papers. The contributed presentation format for the MMB 2018 Conference is in flash oral/poster format only. Authors are required to give a "Flash" 60-second presentation of their poster to the general audience of the conference the morning before their corresponding poster session. Please keep in mind that the final version of the submitted abstract will be distributed at the conference. Flash presentations consist of a maximum of two (2) Power Point slides that should draw the attention of the attendees and motivate them to visit your poster during the Poster Session. Please keep in mind that the technical digest for MMB 2018 will not be printed and it will be available as a download only approximately two weeks prior to the Conference date. For those who plan to submit patent applications, the date when the content of your paper will be disclosed to the public is March 12, 2018. Abstract Deadlines and Due Dates (No Deadline Extensions): Late News Abstract Deadlines and Due Dates (No Deadline Extensions): All dates end 23:59 Honolulu, Hawaii, USA time.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890795.64/warc/CC-MAIN-20180121155718-20180121175718-00628.warc.gz
CC-MAIN-2018-05
1,566
10
https://confengine.com/conferences/selenium-conf-2020/proposal/14836/q-a-with-the-selenium-committee
code
Q&A with the Selenium Committee Q & A with the Selenium Committee moderated by Anand Bagmar schedule Submitted 10 months ago People who liked this proposal, also liked: Anand Bagmar - Test Automation of Real-Time, Multi-User GamesAnand BagmarSoftware Quality EvangelistEssence of Testing schedule 1 year agoSold Out! Challenges in Testing & Automating Games Testing real-time, multi-user games built for native apps and / or browser-based on phones / mobile devices / tablets / desktop browsers makes testing of regular products as apps, or websites appear like a piece of cake. Testing such real-time and multi-user games becomes even more challenging when you think about automating the same. I got an opportunity to build Functional Test Automation for a suite of games – and what an exciting time it turned out to be! These games are built either using Cocos2d-x, or Unity (cross platform game engines for mobile games, apps and other cross platform interactive GUI and are known for their speed, stability, and ease-of-use). The key challenges I encountered here were: - Millions of users, playing games on a huge variety of devices (Android & iOS native apps, Mobile-Web, and Desktop Web) - Limited unit testing - API testing & Functional Testing done in isolation (mini-waterfall approach) - Usage of Cocos2d-x & Unity for game rendering – which cannot be automated via Appium - Limited Functional Automation (for native apps) An approach to Functional Automation of Real-Time, Multi-User Game scenarios I overcame the above mentioned challenges by doing the following: - A better way of working (you can call it ‘Agile’) - Break down the walls by fostering a mindset of “build quality-in, as opposed to test for quality” - Built a new functional automation framework using java / testing / appium-test-distribution / reportportal / jenkins with focus on – specify test intent once and run on all supported channels (ex: Android, iOS, Mobile-Web, and Web) - Built a solution for Cocos2d-x layer automation - Created a vision of CI-CD for the organization, and setup code-based CI pipelines to enable end-2-end visibility - Made the framework extensible by providing ability to use same framework for multiple games The focus of this talk will be to: - Share an example of a particular use case - Share the solutions, including code snippets, implemented for: - Functional Test Automation Framework Architecture & Design - Ease of Test Implementation, while maintaining code quality and promoting reuse - Test Execution on local Vs CI, on-demand as well as on every new build
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153816.3/warc/CC-MAIN-20210729043158-20210729073158-00464.warc.gz
CC-MAIN-2021-31
2,595
30
https://carterkaplan.blogspot.com/2016/04/peninsulas.html
code
We are speechless In a state of linguistic penury Dangling from the Of syntactical specific gravity Into a wash of inarticulateness Hanging on by the narrow bridge Of the speech act itself, if at all. We are was-land, were-land Is-land, will-land, would-land Wood-land, main-land, plain-land, Plane land: disembark through the bridge.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102469.83/warc/CC-MAIN-20231210123756-20231210153756-00350.warc.gz
CC-MAIN-2023-50
334
11
https://www.roseindia.net/answers/viewqa/Java-Beginners/6893-HashMap/HashSet.html
code
HashMap/HashSet - Java Beginners
HashMap/HashSet: I'm working on a game, and I want to know how the code for HashMap and HashSet works, or can you give me the code that needs to be included in the game engine? I have the exam on Monday. Can you please help?
HashSet In Java: HashSet implements the Set interface. This class is backed by an instance of HashMap. HashSet allows the null element to be inserted. HashSet is not thread safe, i.e. in multiple...HashSet In Java: In this section we will read about HashSet in Java.
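The thread never shows working code, so here is a small, self-contained Java example (not taken from RoseIndia) of the kind of thing a game engine might do with these classes: a HashMap for keyed lookups such as player scores, and a HashSet for membership checks such as unlocked levels. The player and level names are made up for illustration.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal illustration of HashMap and HashSet in a game-like setting.
public class GameState {
    public static void main(String[] args) {
        Map<String, Integer> scores = new HashMap<>();
        scores.put("alice", 120);
        scores.put("bob", 95);
        scores.put("alice", 150);          // putting the same key again overwrites the old value

        Set<String> unlockedLevels = new HashSet<>();
        unlockedLevels.add("forest");
        unlockedLevels.add("cave");
        unlockedLevels.add("forest");      // duplicates are ignored; a set keeps one copy

        System.out.println("alice's score: " + scores.get("alice"));              // 150
        System.out.println("cave unlocked? " + unlockedLevels.contains("cave"));  // true
        System.out.println("levels unlocked: " + unlockedLevels.size());          // 2

        // Neither class is thread-safe; synchronize externally if several
        // game threads mutate shared state.
    }
}
```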
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647883.57/warc/CC-MAIN-20180322112241-20180322132241-00191.warc.gz
CC-MAIN-2018-13
501
7
https://www.freelancer.com.ru/projects/php-engineering/quick-php-tutur-for-working/
code
OK, here is the thing, I'm sick of searching online. I need to be able to work with php urls, you know how you have php?... urls. I need to know how to use the urls with my functions or how I can use classes in urls. I'm working with a mysql database and right now the only way I know how to use sorting code or anything is by going to another page, and this is not an efficient way. So what I need:
1. Give me examples and explain functions of php, used in html, with mysql, and how EXACTLY you call them from a web browser.
2. Give me examples and explain how to use php classes in urls or how it works, I don't know??
3. I need a function that I can use to activate on click, a php function; how do I use onclick features, not a button, but just a link, I need that function to activate on the click of the link.
4. How to work with variables in a url when calling them.
5. Also, how to use PHP_SELF in the function so that after the function is done it'll update with the sorted records. I use mysql queries with Order By ..., I need this in the function but to be able to sort and refresh the page sorted.
So in other words, I need examples of the above, easy to understand examples with lots of comments, and I need you to be on chat if I have any questions. And I need to get the full big picture of how php works with urls. This shouldn't be a 10min job for a pro, plus answering maybe a few of my questions.
1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done. 2) Deliverables must be in ready-to-run condition, as follows (depending on the nature of the deliverables): a) For web sites or other server-side deliverables intended to only ever exist in one place in the Buyer's environment--Deliverables must be installed by the Seller in ready-to-run condition in the Buyer's environment. b) For all others including desktop software or software the buyer intends to distribute: A software installation package that will install the software in ready-to-run condition on the platform(s) specified in this bid request. 3) All deliverables will be considered "work made for hire" under U.S. Copyright law. Buyer will receive exclusive and complete copyrights to all work purchased. (No GPL, GNU, 3rd party components, etc. unless all copyright ramifications are explained AND AGREED TO by the buyer on the site per the coder's Seller Legal Agreement).
Php examples with lots of comments and how to call them from url
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506988.10/warc/CC-MAIN-20200402143006-20200402173006-00415.warc.gz
CC-MAIN-2020-16
2,472
18
https://autoaudi.deviantart.com/
code
One of my new year's resolution was not to waste time napping but I'm already breaking it in two week. Great. Also, School isn't helping the wastage of time. I have seven days a week, take out five days for my duty as a student and I got two left. On Saturdays I slack my butt off because going out of my room for five days kills my energy level. I still love playing badminton not gonna stop but this means even less time for SFM on Sunday. I am apparently also interested in composing music, and learning spanish, also applying for jobs to get a custom pc build. Juggling all these is killing my motivation for animating and modelling. I learnt how to make clothes, that's cool but modelling and animating takes time, I can't just spend one whole day on it. No. Therefore, I want to make a monthly thing. Maybe this month, I will focus on my modelling projects, try to finish them all during my free time. Then next month, I will focus only on animating. Whereas, the other small interests I have like music, and spanish and jobs can be done on Friday evening before I sleep. I test this 'style' for a few months and if I can work faster this way, yay. If not, I'll not be lazy and make a freaking schedule. Note: I will update this journal randomly and also give status posts as random updates too, and I will notify you guys about the changes in this journal at least once a month :^)SFM Posters Nothing (I want to get something done though since I have Core)SFM Animations --Drawing Storyboard (Two mur weeks of waiting for the damn tablet at most) -Male and female uniforms --Working on female now 45%-ish -Update the hexing tutorial (kill me) --Make a video (writing script) --Stash the images this time (just do it) I will edit the list in the future, and oh don't mind the names and my drunkness
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00178-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
1,804
11
https://github.com/w3c/csswg-drafts/issues/2848
code
[css-box-3] child-contain: border-box; to resolve child margins at parents edge #2848
Countless times did I hit the same issue: I want to give child elements a margin all around them so they always keep distance from each other. But I can't predict which children will border their parent container's edges, so I have no easy way of eliminating the margin where children are side to side with their parent. Yes, I could give the parent a negative margin and hide overflow, but that will only work if all children have the same margin; otherwise I pull content out of the parent too far or not far enough. I could try to mess around with a calculated width, but that again often requires me to know which element will be displayed where, which especially on responsive designs I often cannot predict. Could we look into one simple property just for this problem, meaning contain any child by its borders, but let the margins overflow? Default would be contain margin-box. And when we add it, it makes sense to consider contain padding-box and contain content-box. Could we draft this?
I originally considered calling this property contain:, but it seems that same name has already been applied to another property in candidate release, where the name is rather vaguely related to what that property does. Too bad. child-contain will do, as it clearly indicates what is being contained, followed by a value of how it is being contained. ...usually covers all cases for me.
Ah, Flexbox (and Grid) have this already solved, then. If you use the gap property (note that gaps aren't implemented for Flexbox yet, but they will be), does this solve your problem?
It might solve my problem, however would it not make more semantic sense to have a property which may be applied on any element? Not sure if
When gap is introduced, the issue has been addressed for flex boxes, but why not introduce a more general property which may apply on all relatively or statically positioned elements?
Because other layout systems have different constraints, and the problems are different. ^_^ In particular, margin collapsing and floats end up making it very complex to add the equivalent of 'gap' to block layout, much more so than it is in Flexbox or Grid. Note that, for example, ... Furthermore, the use cases of Block layout (laying out text, generally) don't really call for that type of control over separation between children; if that's the sort of thing you need to be doing, you probably want to switch to a layout mode more specialized for your problem, like Flexbox. Your use-case can be done in Block layout by just setting top/bottom margins on each child, and then setting the top margin to 0 on
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525133.20/warc/CC-MAIN-20190717081450-20190717103450-00093.warc.gz
CC-MAIN-2019-30
2,693
18
https://blog.gravypower.net/2017/12/04/test-first-drop-bear/
code
I love TDD, but there could be a Drop Bear waiting for you. The whole Red Green Refactor cycle is great, but can it be taken too far? When I started coding Test First I had a hard time changing my mindset; for most of my programming life I just let the syntax flow out of my fingers, then ran it for validation. On top of the mindset difficulties, there seemed to be a lot of tedious code that I needed to write. Back then I used TDD to define things like properties and their getters and setters on POCO objects in an application; I now see this as just wasted effort. Lately, I have been asking myself, should I drive the definition of the Composition Root with tests? To answer this I needed to ask myself another question: would I use TDD to define a configuration file? In the end, the Composition Root is (mostly) configuration. Personally, I don't see any advantage in defining the composition root test first, for the same reason that I don't see any advantage in defining a configuration file test first: configuration only really changes which logical execution pathway will be used in your application at runtime, and testing these logical pathways is the important bit. If there is some logic in registering types, for example registering plugins for classes that implement IPlugin, the logic that finds these types would be best built using TDD. At first, I thought this would be an exception to my answer of not using TDD when defining a Composition Root, but it's not: this logic is not configuration; its output is used to configure which plugins are available, not the logic itself. So how then do you verify that your Composition Root is correct? Well, that is something that would fall into the realm of integration testing. Your DI container could also help to guard that things have been wired up correctly. I rely on the fact that my DI container of choice (let's face it, always Simple Injector :) works as per its specification, and on integration tests, reserving TDD for building logic. At some point down the stack, you need to start to trust things because you can't see electrons :).
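The wiring-versus-logic distinction is easier to see in code. The sketch below is a generic Java analogue (the post itself concerns .NET and Simple Injector), with made-up Plugin and Application types: the composition root only wires things together and is left to integration tests, while the plugin-selection method contains behaviour worth driving test-first.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Generic Java analogue of the distinction above; not the author's code.
public class CompositionRootExample {

    public interface Plugin { boolean isEnabled(); }

    /** The application simply receives whatever plugins were wired in. */
    public record Application(List<Plugin> plugins) { }

    // Logic: deciding which discovered plugins to keep. This has behaviour
    // that can be wrong, so it is the part worth driving with TDD.
    static List<Plugin> selectPlugins(Iterable<Plugin> discovered) {
        List<Plugin> enabled = new ArrayList<>();
        for (Plugin plugin : discovered) {
            if (plugin.isEnabled()) {
                enabled.add(plugin);
            }
        }
        return enabled;
    }

    // Configuration: the composition root only wires things together, so it is
    // covered by integration tests (and the container itself) rather than TDD.
    public static Application compose() {
        return new Application(selectPlugins(ServiceLoader.load(Plugin.class)));
    }

    public static void main(String[] args) {
        System.out.println("plugins wired: " + compose().plugins().size());
    }
}
```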
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100540.62/warc/CC-MAIN-20231205010358-20231205040358-00121.warc.gz
CC-MAIN-2023-50
2,107
4
https://www.stat.math.ethz.ch/pipermail/r-help/2020-November/469450.html
code
[R] [EXT] Re: Inappropriate color name M@Roo@ @end|ng |rom |1-out@ourc|ng@eu Fri Nov 20 13:59:23 CET 2020
> Remember that github stopped using the term "master" to describe the main branch of a repository for example.
Is Github some sort of national language institute, with a board of literary, sociology, and psychology professors? Afaik github is owned by Microsoft, and Microsoft is known to be an offender of people's rights. Who the @#$@#$ cares what they do?
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663048462.97/warc/CC-MAIN-20220529072915-20220529102915-00550.warc.gz
CC-MAIN-2022-21
494
10
https://www.exoplatform.com/blog/accessibility-on-exo-platform/
code
In today’s modern digital workplace, the ease with which users can interact with their digital environment is key to guarantee a successful digital employee experience (DEX). Generally speaking, most businesses focus their efforts on improving both UX and UI of their solutions to help end users effectively navigate their applications, find what they are looking for and get things done. For most of us, browsing our digital workplace or any other application is easy as they are primarily designed to fit our needs and make our experience better. However, this is not always the case for over 15% of the world’s population or roughly one billion people who experience some sort of disability. For this reason, we are committed to provide an inclusive and accessible platform for everyone, regardless of their abilities, disabilities and the severity of the latter. Additionally, we strive to support customers in reaching their accessibility targets, in particular regarding compliance with the latest accessibility standards and guidelines. In this blog post, we are going to walkthrough our approach to software accessibility, the standards we seek to comply with and our plans for both short and long term future. But first, let’s define software accessibility. In general, software or computer accessibility refers to making a system accessible for people with disabilities. As our dependence on technology grows further by the day, accessibility has become an absolute must for businesses in order to provide all users with a similar user experience and make the use of both hardware and software less challenging. There are a variety of international standards and guidelines (such as WCAG, ATAG, RGAA, …) that define a list of criteria to assess the conformity of web content and pages. Being partially or fully compliant with some or all of these standards guarantees better accessibility, usability and an enhanced user experience for all end users. Our approach to software accessibility As a web-centric solution, we seek to comply with the Web Content Accessibility Guidelines (WCAG) , which represents a set of guidelines and recommendations to help make web content accessible for people with disabilities. Additionally, we look to make our digital workplace compliant with Section 508 of the Rehabilitation Act, amended by the Workforce Investment Act of 1998. Section 508 requires that federal agencies develop, build and use information and communications technology such as websites, portals and digital workplace solutions that can be accessible for people with disabilities whether or not they work for the federal government. Our initial focus has been placed on assessing overall navigation and conformity of content oriented pages and applications within eXo Platform. Here, the aim is to allow users with learning disabilities (dyslexia), visual or hearing impairments, to easily locate, access and consume various types of information within the activity stream and collaborative spaces such as news articles, wikis, chat messages and documents. To guarantee a smooth and effective process, a team has been assigned the tasks of performing internal audits, assisting eXo clients with accessibility projects and gathering feedback from end users. We have decided to put together a diverse and multidisciplinary team composed of software engineers, analysts, designers and testers to alter processes at all stages from product ideation, tooling and design to software development and testing. 
Going forward, the objective is to periodically assess the accessibility score of eXo Platform’s main existing applications and perform the right adjustments to achieve higher scores. In terms of product development, we are fully committed to developing a digital workplace that is accessible by design. To achieve this, different teams will be provided with a continual accessibility training to be aware of the latest trends, requirements and best practices. Additionally, feature teams are currently working on Including people with disabilities as design personas while quality assurance teams are prepared to perform usability tests with participants who experience some sort of disabilities and impairments. Last but not least, we plan to widen the accessibility coverage to more collaborative and interactive workflows already available in the product as well as more usage scenarios, for example content authoring thanks to ATAG guidelines. Achieving accessibility is a continuous process that involves our internal teams, clients and community members. This is why we rely on your feedback and suggestions to help us pinpoint accessibility limitations and work on potential improvements. For further information, don’t hesitate to consult our accessibility statement, if you encounter any issues or drawbacks, please make sure to contact us via our website or open community “eXo Tribe”.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656869.87/warc/CC-MAIN-20230609233952-20230610023952-00437.warc.gz
CC-MAIN-2023-23
4,930
19
https://pcminecraft-mods.com/survive-mini-planets-mcpe-bedrock-map-1-2-6-1-2-5/
code
On this map you will find a new way to survive in the style of Skyblock. There are many different planets, asteroids and moons here. You will have to survive on the planet and explore other celestial objects, where you can find food and items necessary for survival. How to install Survive on the Mini-Planets - Download .mcworld - Just open the file and the game itself will install all the necessary files - Launch Minecraft and find a map in the list of worlds - Run the map and enjoy it!
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302355.97/warc/CC-MAIN-20220120160411-20220120190411-00484.warc.gz
CC-MAIN-2022-05
491
6
https://help.viewpoint.com/en/viewpoint-hr-management-for-vista/viewpoint-hr-management-for-vista/hr-management-for-vista/expenses/post-expenses-to-a-batch
code
Post Expenses to a Batch The Expense Admin moves expense items from the Expense module to Vista via Payroll or Accounts Payable. - Select . - Use the filters at the top of the page to gather the expense items you would like to process. To process reimbursement or expense items to be paid back to the employee, filter to Reimbursement. Note: If you are processing via Accounts Payable, each user will need to have a Vendor column entry on the User Access page. To process expense items linked to a credit card transaction filter to Credit Note: All credit card transactions are moved to an AP Transcation batch. - To view line detail and attachments, select the paperclip icon to the far right of the line. - If you need to modify an expense item, select the Actions button, and then select Edit for that line. When you are ready to move expense items to Vista, select the check box for those items in the grid, and then select the Move to Batch button. Note: If an expense is missing coding, the check box that allows you to select that line will not be available. In the Move to Batch pop-up window, make the appropriate selections: - If you are processing via Accounts Payable, enter the Batch Month, and then select Move to AP Batch. This will move the expense items into an AP Transaction Batch in Vista. You must continue the processing of this batch in Vista for the user or credit card to be paid. - If you are processing via Payroll, select a pay period and a pay sequence. Optionally, you can select an Earn Code other than what is currently specified on the expense line. Then select Move to PR Batch. You must continue the processing of this batch in Vista for the user to be paid. After Expense lines have been moved, they will show with a status Added to - The Batch Month and Batch ID display in the appropriate columns for items added to a batch. - If you need to edit, delete, or reset approvers for expense items that have been added to a batch, you must first clear the batch status ( ). - If an expense line was added to a batch but not processed, select the More button in the upper right of the page, and then select Reprocess. Typically, this feature is used when you accidentally cancel a batch prior to finalizing it.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00178.warc.gz
CC-MAIN-2022-40
2,242
25
http://continuouswave.com/ubb/Forum1/HTML/002878.html
code
Moderated Discussion Areas ContinuousWave: The Whaler GAM or General Area A stupid question: is a 1990 Whaler a classic or post-classic? |Author||Topic: A stupid question: is a 1990 Whaler a classic or post-classic?| posted 06-24-2002 05:32 PM ET (US) Greetings all! I have been a lurker on the site for some time, and it's great...but I have one burning question - which UI know is a stupid one, by the way, but I'm curious. Where exactly is the Classic/Post-Classic dividing line? Made after 1990 could mean model years after 1990, or including. I have a 1990 Montauk (love the boat!!) and I'm in the grey zone! I don't know where I belong! Do I have an old-timer, or one of the new-fangled versions? ;-) BTW: I did try to find the answer to this question in the FAQ's! posted 06-24-2002 06:23 PM ET (US) 1990 Montauk is certainly a classic. There is sort of a fuzzy line, but it has to do with the introduction date of the boat/design rather than the date of manufacture. 1990 Montauk is the same hull as a 1976 plus montauk. The console shape changed in the mid 80's and the teak started to disappear about the time of your boat. But right up to 2002, its still a classic Montauk. posted 06-24-2002 07:33 PM ET (US) Thanks for the reply. Mystery solved. No matter what the year, these are great boats. posted 06-24-2002 08:34 PM ET (US) It's a Classic, it was made from 1976 to 2002 without changing the hull. Regards, jay posted 06-24-2002 09:06 PM ET (US) Soon to be the proud owner of what I thought was a classic ... 2003 Montauk 170. I see lots of threads here on the new Montauk ... I never see Montauk mentioned on the Post Classic forum ... Will I be banned from here forever once I take delivery? posted 06-24-2002 09:19 PM ET (US) The date of 1990 as a divide between the "classic" and "post-classic" era should be used in reference to the first appearance of the design, not the date of manufacture. In the case here, a Montauk, the boat's hull was designed in the 1960's and refined in the 1970's. That your boat was molded in the 1990's does not affect its classic designation. Some enforce a more strict standard, that the hull must be of the original tri-hull, rounded center hull design. This definition removes even boats like the 18-Outrage from "classic" status. But I find that too narrow a delimiter. In most cases, pre- or post-1990 works rather well. It was about that time there were significant changes in the company's ownership, chief designer, factory location, and use of wood trim and other wood components. Throughout its history Boston Whaler has built well-construted boats with premium materials. posted 06-30-2002 09:41 PM ET (US) My cut-off would be if it was made in Mass. or Florida, even though for awhile they looked the same, just my reference. Jack. posted 06-30-2002 10:42 PM ET (US) Well, in my fuddled old mind a classic whaler is one designed by Dougherty or his predecessors, whether it was built in MA or FL, in 1969 or 1999. Looking at a classic Whaler is like sitting in a Mercedes Benz. There is a familiarity that cannot be counterfieted. If it happens to have teak or mahogany brightwork, that is even better. Red sky at night. . . posted 07-01-2002 12:00 AM ET (US) With so many definitions out there, it is absolutely amazing that more members are not confused. As Taylor mentions - the definition is a bit fuzzy - or perhaps like a bowl of jello - always moving. 
But, as I have mentioned before, they are all Whalers - and people will hopefully realize that there are advantages and disadvantages to all of the designs. ----- Jerry/Idaho posted 07-01-2002 11:02 AM ET (US) Maybe it's like fuzzy jello. posted 07-01-2002 06:39 PM ET (US) Jerry, very well said!! Jack. posted 07-02-2002 10:38 AM ET (US) If one is really a purist the classics ended with the designs influenced and defined by Dick Fisher which means the last ones built in the 'Outrage' and 'Revenge' lines were the slab sides discontinued after Dougherty introduced the 'deep' V hulls even though these did keep to a degree some of the old pedigree --- The various 16 and 17's, 15's, 11's, and 9's and of course the 13's would be included even when slightly modified whether made in the 60's, 70's 80's 90's and this new century! This definition though contrary to many members is an easy line to define for what is truly the Classic Whaler product line --- As JimH and Jerry said though Whalers are all Whalers of high quality when ever or by whomever they were built even though some might be classed as duds they never the less were high quality duds --- chuckle posted 07-02-2002 11:16 AM ET (US) Blackeagle - you might have a good point on the subject of the definition - as in my mind, fuzzy jello is a bit old, moldy, possibly decayed, smelly, et.al. and should be thrown in the garbage. I post this message somewhat in violation of my own criterion - as it is not significantly meaningful, informative or objective. My apologies ----- Jerry/Idaho Purchase our Licensed Version- which adds many more features! © Infopop Corporation (formerly Madrona Park, Inc.), 1998 - 2000.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219691.59/warc/CC-MAIN-20200924163714-20200924193714-00624.warc.gz
CC-MAIN-2020-40
5,108
51
https://lms.2gtraining.in/course/index.php?categoryid=4
code
This Course Group is used for grouping activities in script-based automation
- Teacher: Lavanya .k
Visual Basic for Applications is a computer programming language developed and owned by Microsoft. With VBA you can create macros to automate repetitive word- and data-processing functions, and generate custom forms, graphs, and reports. VBA functions within MS Office applications; it is not a stand-alone product.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.26/warc/CC-MAIN-20240227121318-20240227151318-00692.warc.gz
CC-MAIN-2024-10
413
3
http://themefork.com/php-scripts/ninja-dashboards-multipurpose-responsive-database-abstractions/
code
Create beautiful #responsive dashboards using a single #json string. Ninja Dashboards is a very easy to use multipurpose responsive #dashboard #php script. Dashboards can be made from a library that includes the complete Highcharts (www.highcharts.com) set. This library is extensible to include other types of #charts (for instance, Highstock, Highmaps, D3 js which are coming Soon!) Dashboards are themeable using dashboardJSON. The script also includes a code generator that automatically generates PHP scripts to render dashboards standalone.
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247526282.78/warc/CC-MAIN-20190222200334-20190222222334-00558.warc.gz
CC-MAIN-2019-09
546
1
http://www.devx.com/security/Article/37502/0/page/2
code
A Threat Modeling Walkthrough
The rest of this article walks you through the threat modeling process for the sample CD Music Library web application, following the five-step process listed in the previous section.
An asset is an item of value, an item security efforts are designed to protect. It is usually the destruction or acquisition of assets that drives malicious intent. Obviously, assets have varying values. A collection of credit card numbers is a high-value asset, while a database that contains candy store inventory is probably a lower-value asset. The higher the asset value, the more effort an adversary will expend to gain access to it; therefore, the more resources it costs to protect. Consider these factors when evaluating an asset:
- Confidentiality: If compromised, does the asset present a threat of confidentiality? An example would be a database that contains client tax identification or credit card information. If an adversary were to gain access to this information, the consequences could be devastating—not only for the holders of this data but also for their clients.
- Integrity: If compromised, does the asset present a threat of integrity? An example would be data by which the client is billed, or even data on which a company makes strategic decisions.
- Availability: If compromised, does the asset present a threat of availability? An example would be a database server that contains information by which employees are granted access to specific rooms in a building. If the server were not available to determine that the appropriate employees were allowed into the manufacturing shop, there would be a loss of untold volumes of income.
Applying the asset-definition process to the sample music CD library system resulted in identification of the assets shown in Table 1.
Table 1. Music CD Library Assets: The table shows the reasoning behind the sample system asset identification on three measures.
- Member Data. Confidentiality: Data contains identification code as well as contact information and address information. Integrity: Data is used to identify the client within the system as well as grant access to the system. Availability: If data were unavailable, the member could not login, view the music CD library, or use any feature of the system.
- Music CD Inventory Data. Integrity: Data is used to identify the music CDs that are potentially available for lending. Availability: If data were unavailable, members could not request a specific music CD.
- Lending Status Data. Integrity: Data is used to determine whether a music CD is in the warehouse or in the possession of a member. Availability: If data were unavailable, the system could not determine the location of a specific music CD.
- Late Fee/Lost CD Payment History. Confidentiality: Data contains payment method information such as checking account and/or credit card account numbers. Integrity: Data is used to determine whether a member has a late/lost fee and whether it has been paid. Availability: If data were unavailable, payment could not be made.
User entities are the entities that legitimately interact with a system. These entities could be actual end users such as system administrators, data entry users, and anonymous users. In addition, database connections used to interface with other systems are considered to be user entities. Table 2 shows the various entities identified for the sample system.
Trust Levels and Boundaries
Trust levels define the minimal access granted to a user entity within a system.
For example, a system administrator role may have a trust level that allows them to modify files in a certain directory on a file server, while another user entity may be restricted from modifying files. Trust boundaries define the location where the trust level changes for a user entity. An example of a trust boundary might be an incoming data validation subsystem. After incoming data has passed validation, the system can elevate its trust level and store it in the database. Table 2 shows the user entities and trust levels for the sample system. Table 2. Sample Music CD Library System Trust Levels: The threat modeling process identified these user entities and trust levels. ||Accesses the system to perform maintenance activities, setting modifications and issue resolution. ||Full access to all features and settings of the system. ||Manages the library inventory, check-in, check-out, and receives payment of late/lost fees. ||Access to member data, music CD library data, lending status data, and payment history data. ||Browses inventory and requests CDs for lending. Also views/maintains personal ||Read-only access to music CD library data and payment history data. Can modify rights to their specific member data. ||Accesses the system to review membership policies and sign up to become a member. ||Can only submit a membership request, view membership policy screen, and access member login screen. Input points are points where user entities and data enter a system. Output points are points where user entities and data exit a system. While you may not need to define trust boundaries for all input/output points, defining them during the threat modeling process is beneficial because it defines the scope of the system. All activity that occurs beyond these points may be addressed by a separate threat modeling process. An example of an input point would be a user entity that gains access to a system's authentication screen through a web browser. The authentication screen is where the system will learn the user entity's identity and grant the appropriate trust levels to the user entity—which by definition is a trust boundary. Note that the security of the web browser itself is beyond the control of the system. An example of an output point would be an export process for a database, such as SQL Server Integration Services. The export may generate a text file containing client data in a directory on the file server for consumption by another system. Because the text file is located outside of the scope of the system being modeled it is considered an output point. Table 3 lists the input/output points identified for the sample system. Table 3. Input/Output Points: The threat modeling process identified these input/output points for the sample music CD library system. ||Input / Output ||Web browser used to gain access to the features of the application. ||System Administrator tools to gain access to the system features. For example: SQL Server Management Studio ||Printed list of music CD requests physically pulled from the warehouse. ||Printed receipt given to member when music CD is picked up. Identifying the user entities, trust levels, boundaries, and input/output points clearly defines access to and use of the system's assets. Use Case Scenarios Use case scenarios are often presented as a part of the development process of a system. They depict the context in which the system is to be used and how end users interact with the system. 
Use Case Scenarios
Use case scenarios are often presented as a part of the development process of a system. They depict the context in which the system is to be used and how end users interact with the system. In threat modeling, use cases are valuable for testing vulnerability mitigation as well as identifying possible avenues for system security penetration. The documentation of the system's input and output points can certainly aid in the development and authoring of use case scenarios. Table 4 lists some use case scenarios for the sample system.
Table 4. Use Case Scenarios: Use case scenarios aid in testing vulnerability mitigation and help identify possible avenues for system security penetration.
- An anonymous user accesses the system through a web browser. The user does not have a member ID. After reviewing the membership policies, the user accesses the member sign-up screen, populating all required fields and submitting the information. At that point, the user exits the system to obtain a member ID sent by the system to the user-specified e-mail address.
- An anonymous user accesses the system through a web browser. The user, having a member ID, accesses the login screen and enters the member ID and password. The login is successful. The user navigates to the music CD library and enters a partial band or artist name in the search textbox and clicks the Submit button. The system finds CDs that contain the entered value in the band or artist's name, and presents those to the user.
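Because use case scenarios describe concrete paths through the system's input points, they translate naturally into automated checks. The sketch below illustrates this with a stand-in client; the FakeClient class, the URL paths, and the status codes are assumptions for illustration and do not come from the article's sample application.

```python
class Response:
    """Tiny stand-in for an HTTP response."""
    def __init__(self, status_code: int):
        self.status_code = status_code

class FakeClient:
    """Assumed stand-in for a test client against the sample web application."""
    def get(self, path: str) -> Response:
        # Anonymous requests to member-only pages are redirected to the login screen.
        if path.startswith("/library/"):
            return Response(302)
        return Response(200)

    def post(self, path: str, data: dict) -> Response:
        # The sign-up screen rejects submissions that omit required fields.
        required = {"name", "address", "email"}
        if path == "/members/sign-up" and not required.issubset(data):
            return Response(400)
        return Response(200)

def test_anonymous_user_cannot_reach_member_library():
    # Mirrors the second use case's trust boundary: library search requires login.
    assert FakeClient().get("/library/search?artist=partial-name").status_code in (302, 401, 403)

def test_member_sign_up_requires_all_fields():
    # Mirrors the first use case: an incomplete sign-up submission is rejected.
    assert FakeClient().post("/members/sign-up", data={"name": "Ada"}).status_code == 400

if __name__ == "__main__":
    test_anonymous_user_cannot_reach_member_library()
    test_member_sign_up_requires_all_fields()
    print("scenario checks passed")
```

Checks like these keep the use case table and the system's actual behavior from drifting apart as the design evolves.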
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121865.67/warc/CC-MAIN-20170423031201-00541-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
8,244
63
https://patents.stackexchange.com/questions/12761/how-to-change-the-order-of-inventors-after-a-patent-is-granted
code
Is it possible to change the listed order of inventors after a utility patent application is granted by the USPTO? And if so, how could I go about doing that?
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532929.54/warc/CC-MAIN-20190421215917-20190422001917-00506.warc.gz
CC-MAIN-2019-18
400
4
https://www.mail-archive.com/[email protected]/msg04892.html
code
On 10/09/2011 11:53 AM, Stefano wrote:
> The full changelog:
> * Completely rewritten interface
> * New Categories page (a buttons table)
> * The Preferences dialog owns the About and the Software Properties buttons
> * Solved a lot of bugs (no more critical errors from glib or pango)
> * And, obviously, Gtk3
Great :) Good to see some progress on it.
By the way, I only get an empty list of packages when I click on any category. I also get a bunch of:
Gdk-CRITICAL **: gdk_window_get_pointer: assertion `GDK_IS_WINDOW (window)' failed
But the terminal doesn't give more information. Maybe it's a good time to implement a logging facility? ;)
Regards, Julien Lavergne
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652161.52/warc/CC-MAIN-20230605185809-20230605215809-00525.warc.gz
CC-MAIN-2023-23
908
2
https://www.internetnews.com/software/oracle-solaris-10-8-11-released/
code
As Oracle continues to prepare for a final release of Solaris 11, the Solaris 10 Unix operating system is getting another update. Oracle released Solaris 10 8/11 this week, providing performance improvements and new hardware support. The Solaris 10 operating system first debuted in 2004 and has been updated on a regular basis ever since. Since its inception, one of the big features in Solaris 10 has been ZFS (Zettabyte File System), which has also improved over the years. The new Solaris 10 8/11 enables enterprises to run ZFS as a root filesystem across their Solaris 10 deployments. ZFS is a 128-bit file system that provides advanced data scalability and recovery options, including "snapshotting," which creates a space-efficient record of a previous system state.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00222.warc.gz
CC-MAIN-2022-21
767
3
https://www.eiga-yokai.jp/tag/%E7%94%9F%E5%AD%98%E6%88%A6%E7%95%A5
code
I’m just uploading this to JewTube. I didn’t make it! Anime Name: Mawaru Penguindrum Original Creator: firexq Original Title: You Should Install Linux Original Video: http://bit.ly/srMzjP Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710918.58/warc/CC-MAIN-20221203011523-20221203041523-00385.warc.gz
CC-MAIN-2022-49
551
6
https://community.onsen.io/topic/694/ons-bottom-toolbar-covers-elements
code
Ons-Bottom-Toolbar covers elements
In the photo below (before and after focus), you will see that when a form contains elements near the bottom and the user places focus on one of them, the keyboard slides up and the toolbar covers the elements. I had a different issue when I was using custom footer CSS, and in general the toolbar resolves those issues. However, I would really like it if it would just push the elements up. I have yet to figure out a way to debug this because the preview tools don't have a simulated keyboard where you can play with the CSS for this case. Any thoughts?
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662619221.81/warc/CC-MAIN-20220526162749-20220526192749-00078.warc.gz
CC-MAIN-2022-21
566
2
https://issues.apache.org/jira/browse/FINERACT-26
code
As an Implementation Specialist, I wish to set up a logo (for the organization) which will be displayed before the Mifos logo. As a first step, no user interface is needed for uploading the logo. As a second step, we can provide a user interface for logo upload. The logo can be different for different tenants (in a multi-tenant setup). The logo should be the same size as the Mifos X logo, to maintain the aesthetics of the header band. Part 2: While we are at it, we should probably also allow themes to be picked on a per-tenant basis.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077336.28/warc/CC-MAIN-20210414064832-20210414094832-00364.warc.gz
CC-MAIN-2021-17
525
6
http://art-martem.com/work-from-home-php-projects.php
code
I recommend this site to anyone who is just starting out and wants to get some experience how can i become rich from nothing their belts. The purpose of Railway Reservation System is a software application which provides the train timing details, reservation,billing and cancellation. Contactable Email, Phone, Skype are the three main methods most clients will use to contact you. PHP Templates. Buy on Amazon. I looked through the job offerings and decided to join. Current PHP Jobs Of course, the issue here is you can't really quantify how much time you'll spend on a single bug fix. I already had remote working experience and enjoy working from my home. Our main mission is to help out programmers and coders, students and learners in general, with relevant resources and materials in the field of computer programming. Assembling products work from home found a great new job in about six weeks with better pay, benefits, and vacation all while being able work remotely. Open parental leave includes maternity, paternity, and adoption. The listing was so robust, I had two friends sign one click binary options for the subscription as well. Freelance and Work-from-Home Projects PHP or hypertext preprocessor is a powerful scripting language for web sites, and many of the work-at-home moms and other freelancers who are proficient in this kind of programming can find great PHP freelance work from home start free that can lead to lucrative remote assignments. What is OpenClinic project? I was skeptical about trying this website but gave it a go. - This can be the beginning of a career in web design. - This means that the level of skill required also increases the amount you can usually earn from those types of jobs. - Stock options revenue binary option platform comparison - Your team will get together one to three times per year in locations around the globe. - Ekattor School Management System is the most complete and versatile school management system on envato market. The researcher therefore suggests that for further research, the following can be researched on. We provides many types of java software projects to be developed as the final year and semester college project for students. Jorden L. Hospital Management System Mini-Project. NET apps, then add a few variables, mixins and nested rules. We Work Remotely The first company I applied to hired me. Communication One of the key aspects of being a freelancer is being able to communicate well with others. This can be use to detect the number of words for your documentation project. I have already recommended it to several friends, and I will use it again if I am seeking employment down the road. Work With Us I am so grateful that I found this site because I landed an awesome job opportunity! What makes a good freelance PHP developer? Contracted arrangements can end up assembling products work from home money and providing benefits for both parties, and savvy freelancers who add PHP to their skill set can expect to see renewed interest from clients looking for "one stop shopping" for web design. Modules under this software are appointment, patient details, treatment details, dosage, creditors details, billing, calculation of bills, reports and statistics. Manage your project timelines and milestones, get a bunch of stats and graphs for your analysis and track time spent on issues with our Project Management tool set. It's a great place to find telecommuting jobs. 
Freelance PHP programmers have to know how to set rates that will earn them a living without charging so much that clients decide to go elsewhere. Well, one good way Fiverr work from home php projects rid of inactive users is by setting Gigs to "pause" mode. Another way to get work in freelance PHP is to build a small business website of your own and offer PHP programming services directly to the business community. Here backers can connect privately with project creator through private messaging system so that backers shares their query and be updated about the project. When you start out as freelance php developer, you'll find getting clients can be quite difficult. This is an example of a hospital domain model diagram. Need Blood. If you're using a site which lets clients leaves reviews, you can also refer your potential clients to these as oco forex ea of your credibility. Great Options to Try HospitalRun. Recent Forum posts. I was part of a community that built PHP games and there was a small subsection that needed programmers. Ecommerce Website Project - Source Code - Free Download They have to wait until they are not provided with their library card and token. I hope I am all set in my job and never have to search again, but if I do, this will be my first stop. Interactive gui and the ability to manage various hotel bookings and rooms from an android work from home php projects makes this hotel management system very flexible and convenient. Software Engineer I really enjoyed and benefited from using FlexJobs. Remote Jobs: Design, Programming, Rails, Executive, Marketing, Copywriting, and more. Most of the work I've gotten has been through repeat-business rather than through new clients. The regular hotel management system project entirely in an android app. It's another way for you to bid on jobs that you feel you might be able to take on. Thanks, FlexJobs! Management of profile: If you provide a good service, your clients are likely to keep coming back and soon you'll be in a position where you have to turn down work. The FlexJobs application process and profile made it easy to apply for jobs. If a client gets frustrated with the lack of your response, you definitely will sour the whole project. From this projects you can learn Business level projects. Of course, you'll be competing against programmers who are happy to work for a lower price. Need to hire a freelancer for a job? I was able to find exactly the job that I was hoping to find if not better! I also received relevant jobs forex helsingborg telefonnummer your emails, which made it easy. But even making some work samples, demos and screenshots of previous work can really help. School management system project source code in phpschool management project forex brasilianska real in programming language php. - Get my forex when to invest in bitcoin cash china forex rate - You'll be able to find lots more work knowing these frameworks. - I have already recommended FlexJobs to a couple of people. - Work With Us — Automattic - This way you know exactly what has been documented and outlined as needing doing. - Forex broker business plan globe forex kolkata fidelity brokerage account trade options It's worth looking into getting some skills in PHP if you want to be able to offer web design as part of your work at home business. I also recommend having some knowledge of web servers. 
Net — Source Code Downloads: Displaying search result for: If, when all the work has been completed, the client requests more work that wasn't in the requirements document, you're free to either reject it or charge them extra with their agreement, obviously! Hire php freelancers And Find php freelance Jobs | art-martem.com This Project provides more accurate and efficient way to take exam. We cover all work from home php projects of company travel, so dust off that passport! Tim S. So the site only ever offers active Gigs, which is great for clients and freelancers. Brad B. It's great work because I already know the product framework inside and out and so I can confidently analyse the work that need's doing and how to do it. PHP programmers can create submission forms, link to MySQL databases, or provide handy accounting features built into a web page. How to create an ASP. The tracking system for positions I applied for was also a valuable resource. Two of the three jobs I applied for ended up in interviews Make sure you offer them an easy way to contact forex brasilianska real. The most cost effective way of handling all Hospital Patient management system process. Source code project can be found at GitHub. Main objective of this project is to design simple software for organizations work from home php projects managing various types of works related to employees. Because of how cheap the services usually are, the type of customer you get is likely to be very inexperienced too, which can be a good and bad thing. Follow these steps to use the application: You also don't have to pay any fees on the amount you earn since there's no service charge. How To Get Freelance PHP Developer Jobs Thank you, FlexJobs! Before we look at ways to seek out jobs, what can you do to make yourself stand out from the crowd? Create a new PHP project work from home php projects "wishlist". This project will show you how to build a simple website using the PHP programming language. By offering a solution which is easy to install, run and maintain. Currently, OpenClinic has the following options: Do you have a GitHub project OpenClinic GA is an open source integrated hospital information management system covering management of administrative, financial, clinical, lab, x-ray, pharmacy, meals distribution and other data.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998755.95/warc/CC-MAIN-20190618143417-20190618165417-00333.warc.gz
CC-MAIN-2019-26
9,210
49
https://www.kellytechno.com/blog/education/how-machine-learning-ai-are-empowering-data-science/
code
How Machine Learning & AI Are Empowering Data Science?
Both Big Data and Data Science have become major revolutionary innovations of the 21st century. The use of Big Data and advanced analytical models in Data Science has become crucial to enterprises' business development processes. The analytical models in Data Science have proven to be highly effective in extracting insights from Big Data. One major concern in analyzing Big Data is that 90% of the data generated globally is unstructured, while only 10% is structured. Traditional Big Data technologies like Hadoop or Spark have proven inefficient at extracting value from unstructured Big Data. This is where AI and Machine Learning technologies become crucial. Grasp in-depth knowledge of the analytical applications involving Data Science along with AI/ML technologies with our advanced Data Science Training In Hyderabad program. Now, let's understand how AI and ML technologies are empowering Data Science. The general applications of Data Science, such as data mining, data processing, and data visualization, can be handled directly with existing tools and techniques. But when it comes to advanced Data Science applications like predictive analytics, fraud analysis, or demand forecasting, the model needs to be trained extensively with AI and Machine Learning algorithms. These algorithms help the models make accurate predictions from the given data sets; without such training, accurate predictions are not possible. To better understand the use of AI/ML in Data Science, consider the recommender system used by Netflix: the data collected from users is analyzed by a recommender system that generates accurate, user-based recommendations. Become an all-round professional expert in the Data Science domain with the help of Kelly Technologies' advanced Data Science training program.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510888.64/warc/CC-MAIN-20231001105617-20231001135617-00563.warc.gz
CC-MAIN-2023-40
2,187
9
https://littlebirdelectronics.com.au/atmel-avr-at89s2051
code
The AT89S2051/S4051 is a low-voltage, high-performance CMOS 8-bit microcontroller with 2K/4K bytes of In-System Programmable (ISP) Flash program memory. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard MCS-51 instruction set. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89S2051/S4051 is a powerful microcontroller which provides a highly flexible and cost-effective solution to many embedded control applications. Moreover, the AT89S2051/S4051 is designed to be function compatible with the AT89C2051/C4051 devices, respectively. The AT89S2051/S4051 provides the following standard features: 2K/4K bytes of Flash, 256 bytes of RAM, 15 I/O lines, two 16-bit timer/counters, a six-vector, four-level interrupt architecture, a full duplex enhanced serial port, a precision analog comparator, and on-chip oscillator and clock circuitry. Hardware support for PWM with 8-bit resolution and 8-bit prescaler is available by reconfiguring the two on-chip timer/counters. In addition, the AT89S2051/S4051 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port and interrupt system to continue functioning. The Power-down Mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next external interrupt or hardware reset. The on-board Flash program memory is accessible through the ISP serial interface. Holding RST active forces the device into a serial programming interface and allows the program memory to be written to or read from, unless one or more lock bits have been activated.
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818695066.99/warc/CC-MAIN-20170926051558-20170926071558-00149.warc.gz
CC-MAIN-2017-39
1,910
5
https://pdcl.itch.io/dino-eat
code
A downloadable game
Dino Eat is a small game I made for my little son so he can play something with a friend without having to compete or lose. The game has no objective, as it is a prototype, but each dino starts small and grows as he eats.
Note: When the timer reaches zero, the game will exit.
- Creatures will avoid you, but are attracted to food.
- Each dinosaur has an influence radius and a damage factor based on its size.
- Each dino starts small and grows as he eats.
- Each dinosaur has an advantage: T-Rex screams and paralyzes targets, Triceratops can attack without stopping, and Raptor has increased speed.
- You can select which dino to use at the start of the round.
- F1: resets everything to zero (except the time).
- PLUS and MINUS: grows/shrinks all dinos.
- PAGE BUTTONS: add/subtract 60 seconds from the time.
- F5 or F6: lets player 1/2 select a new dino.
- F10: switches fullscreen mode.
- F12: disables/enables the timer.
The controls of the game are as follows:
- Player 1: WASD and IOP for "EAT", "ATTACK", "SCREAM"
- Player 2: ARROWS and NUMPAD_4/5/6 for "EAT", "ATTACK", "SCREAM"
Any feedback is welcome.
Click download now to get access to the following files:
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348523564.99/warc/CC-MAIN-20200607044626-20200607074626-00285.warc.gz
CC-MAIN-2020-24
1,238
22