url (string, lengths 13–4.35k) | tag (string, 1 class) | text (string, lengths 109–628k) | file_path (string, lengths 109–155) | dump (string, 96 classes) | file_size_in_byte (int64, 112–630k) | line_count (int64, 1–3.76k)
---|---|---|---|---|---|---
https://ravesearch.com/how-to-create-an-erc-token-without-coding-explained/
|
code
|
The ERC (Ethereum Request for Comments) standard defines a set of rules that any Ethereum token must follow in order to be recognized and utilized by the rest of the platform. Its two main benefits are that it lets independent developers issue their own custom tokens on the Ethereum blockchain, and that it lets companies that want to use Ethereum technology create tokens of their own. This article will give you the information you need to get started creating your own ERC token, including how to do it without coding and why you might choose this route over another, such as an ICO (Initial Coin Offering).
What is the difference between a token and a coin?
A token can be built on an existing chain instead of a specialized blockchain, saving time and money for developers. At a fundamental level, tokens and coins both represent value and enable payment processing in similar ways; the differences arise in their utility. Tokens are generally used as part of a decentralized app (DApp), whereas most cryptocurrency coins serve as a store of value on their own blockchain. Because tokens are created on existing blockchains, most of them can be migrated whenever needs change.
To develop a coin, users need to copy an entire blockchain. By contrast, to create a token, users only need to write a smart contract. Decentralized apps, such as Status and Augur, use tokens for all transactions, so the developer saves time and resources: they don’t need to deploy their own blockchain. You do not need any coding skills, but you will need a wallet to store your token securely.
What can cryptocurrency tokens do?
Cryptocurrency tokens can represent any asset that can be owned or controlled — physical assets such as gold, or virtual assets like video game characters. Tokens can also represent things like loyalty points or IOUs. In the cryptocurrency world, tokens are often used to fund a new project. The ERC20 token is one of the most popular kinds, created for use on the Ethereum blockchain, and it can be kept in any Ethereum-compatible wallet. Creating an ERC20 token is a quick and easy way to jump into the cryptocurrency world, and it doesn’t require any programming knowledge.
The value of a cryptocurrency token is based on its intended use. The Ethereum platform has its own currency, called Ether, which is valuable because it can be exchanged for other cryptocurrencies or used to fund applications built with blockchain technology. Ether coins are stored in individual Ethereum wallets; you don’t need to know how these work internally, and you can keep your coins secure and transfer them at any time using your private key and wallet address. For example, if you create a new project on Ethereum, you can sell tokens that represent shares in your project, and investors will pay for those tokens with Ether. These ERC20 tokens give investors a way to keep track of their share of ownership in your project without having to track individual shares themselves.
Why should I create a token?
You might want to create a token for a number of reasons. Maybe you need to raise money for your project or business. Or maybe you just want to play around with blockchain technology and see what you can do. Whatever the reason, you should keep a few things in mind. First, decide what kind of token you want to create. The two main categories are utility tokens, which are the most common, and security tokens. Security tokens are investment contracts that give the holder an ownership stake or a right to profits in the underlying asset. Utility tokens grant the holder access to something, like a product or service. Determining how tokens will be distributed is also crucial.
One way to create your token is to use a token creation platform like Ink Protocol or BlockCAT. These platforms are relatively simple and can be used even if you don’t have any coding experience. They allow you to define variables like how many tokens will be created, how they’ll be distributed, when they’ll be issued, and more. Once your contract is created, it goes through a process called compilation, in which the compiler for Solidity—the programming language of Ethereum—checks that your contract’s code is well formed before deployment on the blockchain. You then deploy your contract by sending it out onto the Ethereum mainnet for anyone in the world to use and interact with!
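To make the "write a smart contract" step concrete, here is a bare-bones sketch of an ERC20 contract in Solidity. It is illustrative only — not the output of any particular platform — and the token name, symbol, and supply are placeholder values:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Minimal ERC20 sketch, for illustration. Real projects usually start from
    // an audited implementation (e.g. OpenZeppelin) rather than hand-rolled code.
    contract ExampleToken {
        string public name = "ExampleToken";   // placeholder name
        string public symbol = "EXT";          // placeholder symbol
        uint8 public decimals = 18;
        uint256 public totalSupply;

        mapping(address => uint256) public balanceOf;
        mapping(address => mapping(address => uint256)) public allowance;

        event Transfer(address indexed from, address indexed to, uint256 value);
        event Approval(address indexed owner, address indexed spender, uint256 value);

        constructor(uint256 initialSupply) {
            totalSupply = initialSupply;
            balanceOf[msg.sender] = initialSupply;   // creator receives the whole supply
            emit Transfer(address(0), msg.sender, initialSupply);
        }

        function transfer(address to, uint256 value) external returns (bool) {
            require(balanceOf[msg.sender] >= value, "insufficient balance");
            balanceOf[msg.sender] -= value;
            balanceOf[to] += value;
            emit Transfer(msg.sender, to, value);
            return true;
        }

        function approve(address spender, uint256 value) external returns (bool) {
            allowance[msg.sender][spender] = value;
            emit Approval(msg.sender, spender, value);
            return true;
        }

        function transferFrom(address from, address to, uint256 value) external returns (bool) {
            require(balanceOf[from] >= value, "insufficient balance");
            require(allowance[from][msg.sender] >= value, "insufficient allowance");
            allowance[from][msg.sender] -= value;
            balanceOf[from] -= value;
            balanceOf[to] += value;
            emit Transfer(from, to, value);
            return true;
        }
    }

A token-creation platform essentially fills in these same fields from the variables you define in its UI, compiles the result, and deploys it for you.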
Why is Ethereum considered to be the best platform for token creation?
Ethereum is considered the best platform for token creation for several reasons. First, it is very easy to create tokens: all you need are a few lines of code. Second, Ethereum has a very active and helpful community, and there are a number of resources out there if you need them. Third, Ethereum is very flexible: you can create tokens with various characteristics. Fourth, Ethereum is decentralized, meaning there is no central governing entity monitoring the network. Fifth, Ethereum is censorship-resistant: a token cannot be censored or shut down by any government or organization. Sixth, Ethereum is a trustless network, so it isn’t reliant on any third party and is stable. These six reasons should have you believing that Ethereum is the right one for you.
Ethereum also has drawbacks for creating tokens. First, it can take time for your tokens to be created: on average a token is created in around two minutes, but it may take longer if there is a high volume of transactions. Second, Ethereum has high transaction fees. On the security side, however, there are benefits: Ethereum uses Proof-of-Work, which makes it very difficult for anyone with less than 51% of the network’s power to create new blocks or make changes to existing blocks in the blockchain.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816864.66/warc/CC-MAIN-20240414033458-20240414063458-00262.warc.gz
|
CC-MAIN-2024-18
| 6,209 | 13 |
https://www.richard-banks.org/2007/12/living-with-team-foundation-server.html
|
code
|
Recently I wrote up a short overview of how TFS source control works for a client. I've reproduced it here in the hope that it helps you understand how TFS works and reduces the number of “weirdnesses” people experience when using TFS.
For most people the normal behaviour when doing a “Get Latest” is one drilled into us through years of Source Safe (ab)use. Right click the solution file, and select Get Latest Version (Recursive) as shown here:
However with TFS we really should be doing it like this (via the Source Control Explorer):
Experienced TFS users will also typically use "Get Specific Version" with the Force Get option turned on.
Similarly, when doing a check-in through the Visual Studio UI (which is 99.99% of the time), it’s good practice to ALWAYS click the refresh button first to make sure your list of pending changes is accurate.
Problem 2 First
Let’s tackle the second issue (the refresh button) first. When files are changed in Visual Studio, an event is raised indicating that the file has been checked out in source control and that it should be treated as a pending change. At various times those events don’t get picked up by the pending changes window in VS, which in turn means that the UI doesn’t refresh automatically. This can then result in a check-in that misses required files, simply because the UI didn’t show them. (See this forum entry for background info.)
By clicking the refresh button, the UI will re-query the status of the files in the workspace and give you the full list of files available for check in.
Note that it doesn’t ask the TFS server for the status of the files, just the workspace. Also, if you have made changes outside of Visual Studio, or performed offline changes, you’ll need to refresh the pending status for all files in the workspace using the Team Foundation Power Tool (tfpt).
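For reference, the command-line equivalents look roughly like this, run from a Visual Studio command prompt (switch names are per the TFS 2005/2008-era tools — check "tf help" and "tfpt help" on your version, and the path below is illustrative):

    rem Run from a folder mapped in your workspace.
    cd /d C:\src\MyWorkspace

    rem Scan for files edited outside Visual Studio and pend the changes:
    tfpt online . /recursive

    rem Force a full get of the workspace (the CLI analogue of
    rem "Get Specific Version" with the Force Get option turned on):
    tf get . /recursive /force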
There are other situations where you change a file but it doesn't show as being any different from the latest version in TFS. This is particularly noticeable with solution files and occurs because Visual Studio keeps the solution file in memory and doesn't write out the change until you have saved it to disk.
There is also an issue with files that are writeable but not checked out. If you edit one of these writeable files in Visual Studio then VS assumes that the file is already checked out and will not check the file out of source control automatically if you start to edit it. This typically happens when editing files offline, or when using a third party program that overrides the readonly flag.
Understanding Workspaces et al
Now back to the first problem. Why should we do a get latest from Source Control Explorer instead of the solution file? The answer relates to the way in which TFS is designed.
Firstly, TFS Version Control uses the concept of workspaces to track file statuses. A workspace is TFS’s view of what files the server thinks you have on your local machine in a specified path (the local folder as shown here) and is treated as a snapshot of the source repository at a given point in time:
This way, when you do a get latest, TFS will only send you the updates it thinks you need, based on the changes made since you last updated your workspace. This is meant to help reduce network traffic and improve performance.
The other thing to understand is that TFS treats a workspace as a snapshot of the source repository, and therefore each changeset is an atomic change to a known set of files. It expects that all files in a workspace are from the same point in time.
In fact, this is the reason why doing a checkout of a single file only marks it as editable and doesn’t perform a get latest. Buck Hodges’ blog entry explains it better (emphasis mine):
Why doesn't Team Foundation get the latest version of a file on checkout?
I've seen this question come up a few times. Doug Neumann, our PM, wrote a nice explanation in the Team Foundation forum (http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=70231).
It turns out that this is by design, so let me explain the reasoning behind it. When you perform a get operation to populate your workspace with a set of files, you are setting yourself up with a consistent snapshot from source control. Typically, the configuration of source on your system represents a point in time snapshot of files from the repository that are known to work together, and therefore is buildable and testable.
As a developer working in a workspace, you are isolated from the changes being made by other developers. You control when you want to accept changes from other developers by performing a get operation as appropriate. Ideally when you do this, you'll update the entire configuration of source, and not just one or two files. Why? Because changes in one file typically depend on corresponding changes to other files, and you need to ensure that you've still got a consistent snapshot of source that is buildable and testable.
This is why the checkout operation doesn't perform a get latest on the files being checked out. Updating that one file being checked out would violate the consistent snapshot philosophy and could result in a configuration of source that isn't buildable and testable. As an alternative, Team Foundation forces users to perform the get latest operation at some point before they checkin their changes. That's why if you attempt to checkin your changes, and you don't have the latest copy, you'll be prompted with the resolve conflicts dialog.
If you do a get latest in Visual Studio by right-clicking the solution file, Visual Studio gets the current list of files referenced by the solution and requests the latest version of each of those files. It doesn’t do a get latest for the entire workspace. Why? Because Visual Studio is designed to work with any source control provider, and not all source control systems work like TFS.
Since files are retrieved individually, this also helps explain why a new project added to a solution sometimes won’t appear straight away, why you have to manually do another get latest to get it – and why you may have to do a “force update” as well.
If however you use SCE to do the “get latest” (and you do it from the workspace root) then you are updating your local code base with the entire snapshot, not individual files. In the case of a new project being added, your “get latest” would have retrieved the new solution file AND the new project’s source files, so when the solution reloads all the files will be present and you won’t be missing anything.
Hopefully this makes sense, but if not, please let me know so I can flesh out the confusing bits in more detail.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823260.52/warc/CC-MAIN-20171019084246-20171019104246-00502.warc.gz
|
CC-MAIN-2017-43
| 6,696 | 26 |
https://documentation.gravitee.io/am/getting-started/install-and-upgrade-guides/upgrade-guide
|
code
|
4.1 Upgrade Guide
If your upgrade will skip versions: Read the version-specific upgrade notes for each intermediate version. You may be required to perform manual actions as part of the upgrade.
Run scripts on the correct database:
gravitee is not always the default database. Run show dbs to return your database name.
Starting with AM 4.0, the MongoDB indices are now named using the first letters of the fields that compose the index. This change will allow automatic management of index creation on DocumentDB. Before starting the Management API service, please execute the following script to delete and recreate indices with the correct convention.
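The official script is not reproduced in this capture. As a purely hypothetical sketch of the kind of operation it performs (the collection, field, and index names below are invented for illustration and are not Gravitee's actual script):

    // Hypothetical sketch only -- NOT the official Gravitee script.
    // Drop an index under its old auto-generated name, then recreate it
    // with a short name derived from the first letters of its fields.
    use gravitee
    db.applications.dropIndex("domain_1_updatedAt_1")
    db.applications.createIndex({ domain: 1, updatedAt: 1 }, { name: "d1u1" })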
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100172.28/warc/CC-MAIN-20231130062948-20231130092948-00384.warc.gz
|
CC-MAIN-2023-50
| 689 | 8 |
https://www.kurtknows.com/what-is-facebook-messenger-bot-2020/
|
code
|
This video is about what is Facebook messenger bot.
🔥100% FREE TRAINING – How To Start A Profitable Affiliate Marketing Online Business (this is the training that changed everything for me!) – https://www.kurtknows.com/free-training
👇 MORE VIDEOS TO MAKE MONEY FAST FROM HOME 👇
Start Online Business
Make Money Online With Affiliate Marketing
❤️ SUBSCRIBE NOW FOR MORE AWESOME CONTENT ❤️
👋 About This Channel Kurt Knows: I am Kurt and have been an online entrepreneur for several years now, and I created this channel to share and teach others how to start an online business and make money on the internet through different ways such as affiliate marketing, passive income, and many other ways.
🔴 URGENT: You might NOT see my NEW videos UNLESS you
🔔 TURN ON MY NOTIFICATIONS 🔔
How do you make a Facebook Messenger chatbot? The Opesta Facebook Messenger chatbot marketing tutorial comes with a free trial and will show you how to set up a Facebook Messenger chatbot, how to create one for free without coding, and how to build a Messenger chatbot, with examples.
Opesta is a top Facebook Messenger bot for business, similar to ManyChat, and it has a great tutorial. With the free trial you will learn how to make a Messenger bot without coding, as well as what a Messenger bot is and how to set one up, with examples.
A Messenger chatbot is a very important marketing tool that you can also use with your WordPress website as a marketing integration.
#opesta #opestareview #facebookmessengerbot
Please note that all recommendations & links are affiliate promotions.
This video is for educational and entertainment purposes only. There is no guarantee that you will earn any money using the techniques and ideas mentioned in this video. This is not financial advice. Your level of success in attaining the results claimed in this video will require hard-work, experience, and knowledge. We have taken reasonable steps to ensure that the information on this video is accurate, but we cannot represent that the website(s) mentioned in this video are free from errors. You expressly agree not to rely upon any information contained in this video.
No Earnings Projections, Promises Or Representations. Any earnings or income statements, or any earnings or income examples, are only estimates of what we think you could earn. There is no assurance you will do as well as stated in any examples. If you rely upon any figures provided, you must accept the entire risk of not doing as well as the information provided.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00269.warc.gz
|
CC-MAIN-2021-43
| 2,613 | 16 |
https://community.teamviewer.com/English/discussion/8284/commercial-use-connection-time-out/p350
|
code
|
Why does this keep happening? TeamViewer was great; now I'm sick and tired of this useless detection service.
I'm using this as a private user, nothing commercial whatsoever. I use it to help family with their computers. It's the second time I've been blocked lately. How can this be fixed for good, and what am I doing wrong?
Every time I try to connect to their computer it says "A connection could not be established. It seems there might be a licensing problem with your connection partner. ..."
This is an elderly family member that I have to remote into often, at least once a week, to help with simple things like changing the zoom on Chrome. Is it possible they might've gotten flagged from all the connections? How do I get it unflagged for commercial use?
I'm able to remote into my other computers fine so it's on their side.
I posted my question and for some reason, it got moved to this thread.
Although there are similar questions in this thread, I don't have, and never have had a commercial account.
An answer to my question would be most appreciated.
It is used for private use to help my mother with her laptop on occasion. Can my account be reset to reflect this?
FYI TV has been sold to a headless corporation from another galaxy that couldn't care less about free users.
Though TV built their rep on the early free users, the new overlords haven't a clue or care.
I use TV once every 6 months, maybe, to help out tech-challenged f&f. But yet TV thinks I'm commercial.
The detection algorithm is no longer written in code but rather under-cooked mashed potato.
The reset form doesn't work in any browser, all ad blockers etc, disabled.
Too busy "sponsoring" F1 cars, foosball teams, and kitty litter pans.
ack ack ack!!! we come in piece
I have elderly parents and a few elderly relatives that I help maintain their computers when they run into trouble. TeamViewer think I'm using this for a business. Is there anything I can do?
I am a university student and we use TeamViewer to transfer files and operate the instruments online. But I have not been able to access one of our computers through TeamViewer for some time.
When I try to access it I get a message which is showing in the attached png (the top image).
Both our computers have a free license as we use them for personal use, not commercial purposes.
I open TV and try to connect to one of my family member's computers to help them with something, and it fails with the "Connection Not Established" error, with some blurb about "licensing" in small text at the bottom. Like any rational person, I try a second time for a successful connection, and now get "you established and aborted connections too frequently." The only problem is, I never established a first connection.
Why is this built this way? Can this be shut off?
I would like TV to work, as I know it is capable of doing, every time I use it, without these unpleasant circus antics. You can talk about security and hack attacks or whatever, but if the product doesn't do the job it was created for, what's the point?
I have a media PC in my media room and I usually use a free license to manage it from my phone, but now it's blocking me. How can I undo this and get back into my account? Thank you
Same thing happens to me (overnight)
I cannot use remote control suddenly. A message says (in Japanese): "Cannot connect. There is a license issue on the partner's side. In order to connect, a proper TeamViewer license is necessary on your side or the partner's side."
However, there is no problem on the partner's side, and I am a private user. Why does TeamViewer ask about a license?
Why can't I connect to another computer? It gives me a message:
"Connecting to this device requires a valid TeamViewer license."
I am a free user.
I have used TeamViewer for many years for private use, to remote into my father's laptop and my sister's laptop. No problem.
But now I can't use it anymore!!!? Why? I have not changed anything!
I've tried logging out, deleting, and making new connections; nothing works.
Can i not use it for private use anymore?
Thanks for the help.
Same problem. Private use. I try to connect to my laptop next to me and I get the same message. Previously, I had a problem with it limiting my connection time – now I can't connect at all.
I don't know where to get help and why it happened.
I'm having a similar issue trying to connect to my Mom's computer. I'm getting a message: Connecting to this device requires a valid TeamViewer license... Both of us have individual licenses. Back when we owned our company, I had a commercial license (different email addy) and had no problems connecting to my Mom's machine or our employees... Any ideas?
I try to support my 86-year-old father, but TeamViewer says that he is a professional!
What can I do?
I help an elderly man and we have used teamviewer for many years free.
Now I can't connect to his computer and he needs help.
He is not capable of going through downloading a PDF, etc., without my help.
"A connection could not be established. It seems there might be a licensing problem with your connection partner. Connecting to this device requires a valid TeamViewer license for you or your connection."
I'm trying to use the free license to support friends and family, but I always get the error message "There seems to be a license problem with your connection partner...". Both sides are using the latest version of TV (15.35.5). I tried to initiate the connection from either side, but always get the same error.
I used to be an IT pro, but I am retired now. As a pro I used the company license. Could it be that my PC at home is somehow registered for professional use (clearly my pro acct is no longer valid)? How can I start from a clean sheet?
I'm having the same problem. I use TeamViewer to help my parents but getting the same error. We both have up to date TV clients. Me on Windows 11 and my Dad MacOS.
As described above, when I use the Mac TeamViewer client to connect to the same account on Windows,
it shows: "Cannot establish the connection. There seems to be a problem with your partner's license."
I suddenly cannot connect to my one remote computer. At the bottom it says the license may be the problem.
I am a private individual and I used free TeamViewer to remotely monitor a computer. Now I get a message saying COMMERCIAL USE DETECTED. I do not understand your interpretation, as mine is a limited and private use. The computer is identified with the code DESKTOP-PV6DKG7primo and is online. I would appreciate explanations and restoration of functionality. TY Enzo Barbati
[removed per Community Guidelines]
I am utilizing TeamViewer in a 100% personal way.
How can I get help to change my access back so I have access to my laptop remotely?
Vincent L. Graziano
Hi I leave TeamViewer open almost 24/7 with unattended access and I keep getting flagged with commercial usage even though I'll go months and months without actually remotely connecting to it through TeamViewer. Is this just a false flagging issue or is using TeamViewer to periodically update things on my PC while I'm away considered commercial usage? I don't gain anything monetarily, I only gain time as my internet is less than 1Mbps and takes a very long time to update half of anything now. Any answers to this will be greatly appreciated.
I recently bought a new laptop and would like to access my old one when I need information. Both computers are registered in my account, but EVERY time I try to connect, either way, I get an Abort message stating there is a licensing issue with one or both computers. I am not a new user. I have helped a friend with questions on his computer without incident. See the message area at the bottom of the picture. Aladdin is my old laptop and Lenny is my new one.
I'm trying to connect to another computer in my home and the program says there might be a licensing problem. I have a free user account on both computers. What's up?
I also need help, and because I don't have a PAID account I am unable to create a ticket, and no phone number or support email address for them can be found. I do free help for people, and they won't let me connect to anyone at all now! I can't afford a paid version, and why should I pay for one when I provide FREE help? That makes no sense. I need my ID reset but can't find out how to do this because I can't find any way to contact them, so right now, I say they are useless.
Same here as well. Free license, logged into both computers at the same time or just the main computer; tried remoting with a password and with "ask for access"; both ended with the same error message. TV version is 15.35.5. Any fixes soon?
I have for several years supported my 91 year old mother-in-law who lives in a care home using a free Teamviewer licence. Now when I try to logon remotely it says that there a licence issue. I can remotely support my 90 year old father who lives in South Africa, so I think my licence is ok.
I have tried uninstalling and reinstalling Teamviewer on my mother-in-law's computer and have granted easy access. I have also tried logging on to teamviewer on her computer with my own logon, but I still get the same error.
Unfortunately the error flashes up for about a second or 2 so I can't get a screen grab.
Hi, I keep getting the error message saying the below. I have uninstalled and it's still the same; any advice much appreciated. I'm a free user.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817187.10/warc/CC-MAIN-20240418030928-20240418060928-00528.warc.gz
|
CC-MAIN-2024-18
| 9,344 | 64 |
http://languagelog.ldc.upenn.edu/nll/?p=1951
|
code
|
At the Atlantic, David Shenk mediates an exchange of letters between Mark Blumberg and Nicholas Wade about the appropriateness of calling FOXP2 a "speech gene", about "gene for X" thinking in general, and about the nature of science journalism:
Blumberg: Trumping up FOXP2 as yet another star gene in a series of star genes (the "god" gene, the "depression" gene, the "schizophrenia" gene, etc.) not only sets FOXP2 up for a fall; it also misses an opportunity to educate the public about how complex behavior – including the capacity for language – develops and evolves.
Wade: I'm a little puzzled by your complaint, which seems to me to ignore the special dietary needs of a newspaper's readers and to assume they can be served indigestible fare similar to that in academic journals. [...]
As for missing an opportunity to educate the public, that, with respect, is your job, not mine. Education is the business of schools and universities. The business of newspapers is news.
I'm glad we got that straightened out!
Read the whole exchange between Blumberg and Wade here.
For some background, see the discussion and links in "The hunt for the Hat Gene", 11/15/2009.
And as part of my job of educating the public, let me draw your attention to some scientific news announced in a recent paper by M R Munafò et al., and as far as I know not covered by any newspapers ("Bias in genetic association studies and impact factor", Molecular Psychiatry 14: 119–120, 2009):
Studies reporting correlations between genetic variants and human phenotypes, including disease risk as well as individual differences in quantitative phenotypes such as height, weight or personality, are notorious for the difficulties they face in providing robust evidence. Notably, in many cases an initial finding is followed by a large number of attempts at replication, some positive, some negative. Although there has been debate over the statistical arguments concerning the strength of evidence in association studies, there has been less interest in understanding why it is that some genetic associations generate such large literatures of inconclusive results. We wondered whether one source of the difficulties in the interpretation of genetic association studies might lie with the journal that published the initial finding. Studies published in journals with a high impact factor typically attract considerable attention. However, it is not clear that these studies are necessarily more robust than those published in journals with lower impact factors. [...]
Data were analysed using meta-regression of individual study bias score against journal impact factor. This indicated a significant correlation between impact factor and bias score (R2=+0.13, z=4.27, P=0.00002). Our results are presented graphically in Figure 1. We also note that journals with high impact factors tend to publish studies with high bias scores and small sample sizes (as indicated by the smaller circles in the figure).
Here's Figure 1 and its caption:
Meta-regression of individual study bias score and journal impact factor. Bias score is plotted against the 2006 impact factor of the journal in which the study was published. Meta-regression indicates a positive correlation between journal impact factor and bias score (R2=+0.13, P=0.00002), suggesting that genetic association studies published in journals with a high impact factor are more likely to provide an overestimate of the true effect. Circles, representing individual studies, are proportional to the sample size (that is, accuracy) of the study.
In other words, the more prestigious the journal (as measured by its "impact factor"), the less likely the genetic association studies it publishes are to be replicated.
If I were merely in the business of news or entertainment, I'd observe at this point that the particular FOXP2 study behind the Blumberg/Wade discussion was published in one of the highest-impact-factor journals in the world, Nature, and thus is statistically somewhat more prone to fail to replicate than if it had been published (say) in Prof. Blumberg's journal, Behavioral Neuroscience.
But this would be unfair. Details aside, the paper's conclusion (that the two different amino acids in the human-specific version of FOXP2 cause "differential transcriptional regulation in vitro" of a very large number of other genes) is surely true; and the detailed claims about the genetic networks involved may well turn out to be helpful in understanding how the capacity for language develops and evolves.
However, we can also be fairly confident that calling FOXP2 a "speech gene" — and the whole "gene for X" style of thinking that this exemplifies — will become more and more clearly a source of confusion. In my earlier post, I quoted Simon Fisher (the scientist who first discovered the connection between a FOXP2 mutation and a syndrome that includes some speech-related disabilities):
[T]he deceptive simplicity of finding correlations between genetic and phenotypic variation has led to a common misconception that there exist straightforward linear relationships between specific genes and particular behavioural and/or cognitive outputs. The problem is exacerbated by the adoption of an abstract view of the nature of the gene, without consideration of molecular, developmental or ontogenetic frameworks. […] Genes do not specify behaviours or cognitive processes; they make regulatory factors, signalling molecules, receptors, enzymes, and so on, that interact in highly complex networks, modulated by environmental influences, in order to build and maintain the brain.
At some point, I guess, this will become not merely truth, but also news.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500959239.73/warc/CC-MAIN-20140820021559-00076-ip-10-180-136-8.ec2.internal.warc.gz
|
CC-MAIN-2014-35
| 5,705 | 18 |
https://www.linksysinfo.org/index.php?threads/wl-cripplied.7490/
|
code
|
Since HyperWRT is based on the Linksys 4.20.6 firmware, has anyone tried getting into client mode via:
nvram set wl0_mode=sta
nvram commit
reboot
wl ap 0
?? It seems that wl is crippled. I keep getting an "operation not supported" error (which was the case with my original 4.00.7 firmware). The nvram part did work, because after that my Linksys associates, but refuses to talk to my laptop. wl will not turn it into client mode, though. Any thoughts?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823738.9/warc/CC-MAIN-20181212044022-20181212065522-00179.warc.gz
|
CC-MAIN-2018-51
| 445 | 1 |
https://princecheema.com/get-testing-credit-card-numbers-to-test-in-payment-forms-easily/
|
code
|
Today I found a very helpful page with multiple credit card numbers for testing, which helped me test those cards in my projects.
So, I want to say that many times we come into a situation where we need a test payment card to test payment functionality in our project.
And we usually come across test credit card numbers like 4111 **** **** 1111.
But sometimes these credit card details don't work on some payment forms, so the cards listed there worked well for most of us.
This site provides many testing credit card numbers which you can use in any testing payment form.
The rest of the information, such as the expiry month and year, can be any future month and year, and for the CVV you can use 123.
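For context (my addition, not from the original page): most payment forms first validate the number with the Luhn checksum, which is why the published test numbers are accepted while made-up digits are usually rejected. A minimal Python check:

    # Minimal Luhn checksum validator (illustrative sketch).
    def luhn_valid(card_number: str) -> bool:
        digits = [int(c) for c in card_number if c.isdigit()]
        checksum = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:          # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9          # same as summing the two digits
            checksum += d
        return checksum % 10 == 0

    print(luhn_valid("4111111111111111"))  # True: the widely published Visa test number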
So I thought I'd share this information with you. Below is the link where you can see all of the above in action.
I hope the above information helps you a lot.
Please feel free to let me know your feedback in comment section.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817438.43/warc/CC-MAIN-20240419141145-20240419171145-00598.warc.gz
|
CC-MAIN-2024-18
| 953 | 10 |
http://www.computerforums.org/forums/hardware/fan-control-question-204845.html
|
code
|
Re: Fan Control Question
There are things called fanbuses that will do this for you as well. There's a lot of great fanbus controllers out there, and while it's nice being able to control them on the motherboard, I find that having a fine grained control over them on the chassis itself is great when I need that extra blast of air, or want to quiet a PC down in the middle of the night.
Do you have a preference? Do you want them all to be on the motherboard? There's nothing wrong with that, and when you run out of headers you can, as you asked, plug fans directly into the power supply. The main drawback is that unless the fan has its own speed control – a PWM input or a built-in air temperature sensor – or an in-line fan speed switch, it'll run at full speed for whatever voltage it is being supplied, so you lose that fine-grained control.
That's fine for lower speed or low noise fans, but sometimes you need a high CFM fan to get the case to the temp or airflow that you want, so plan ahead when possible.
If you'd like to see some fanbus modules, I can dig some up and post, as I'm sure others will as well if you're interested. One of the benefits to a fanbus is that it only requires one connector to your power supply, but can have up to 8 connections (so I've seen) for you to attach fans that it will both monitor and control.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719453.9/warc/CC-MAIN-20161020183839-00190-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 1,318 | 5 |
http://www.sonicyouth.com/gossip/showpost.php?p=667078&postcount=11
|
code
|
Chord - 3+ notes.
Chord change - moving from one set of notes to another.
The notes have different relationships to each other, depending on their frequency. Certain frequencies are considered 'dissonant', others 'consonant'. Some chords have consonant relationships with others; some have dissonant relationships with each other.
Obviously, Wiki will come to the rescue
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592150.47/warc/CC-MAIN-20180721012433-20180721032433-00264.warc.gz
|
CC-MAIN-2018-30
| 370 | 4 |
https://locusit.com/product/bigml-certified-engineer/
|
code
|
Objective of BigML
– Understand how to parameterize supervised and unsupervised methods to achieve better performance.
– Learn how to compose multiple methods together to better solve modeling problems.
– BigML sources and datasets.
– Supervised (Models, Ensembles, Linear Regressions, Logistic Regressions, Deepnets, Time Series, and OptiML) and Unsupervised (Cluster Analysis, Anomaly Detection, Association Discovery, and Topic Modeling) methods.
– 1-click Model, 1-click Ensemble, 1-click Linear Regression, 1-click Logistic Regression, 1-click Deepnet, 1-click Time Series, 1 click OptiML, 1-click Cluster, 1-click Anomaly, 1-click Association, 1-click Topic Model.
– Simple evaluations and metrics.
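As a sketch of the source → dataset → model workflow the bullets above describe, using BigML's Python bindings (assuming the bigml package is installed and credentials are set in the BIGML_USERNAME and BIGML_API_KEY environment variables; the file name and input fields are illustrative):

    # Sketch of the BigML workflow: source -> dataset -> 1-click model -> prediction.
    from bigml.api import BigML

    api = BigML()                              # reads credentials from the environment
    source = api.create_source("./iris.csv")   # 1. upload raw data as a source
    dataset = api.create_dataset(source)       # 2. build a dataset from the source
    model = api.create_model(dataset)          # 3. train a decision tree (1-click Model)
    prediction = api.create_prediction(        # 4. predict for new inputs
        model, {"petal length": 4.2, "petal width": 1.3})
    api.pprint(prediction)                     # print the predicted class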
1. Modeling vs. Prediction of BigML
2. Supervised Learning with BigML Engineer
Decision Trees: Node threshold, Weights, Statistical Pruning, Modeling Missing Values.
Ensemble Classifiers: Bagging (Sample Rates, Number of Models), Random Decision Forests (Random Candidates), Boosting.
Linear Regression: Field Encodings.
Logistic Regression: L1 Normalization, L2 Normalization, Field Encodings, Scales.
Deepnets: Topologies, Gradient Descent Algorithms, Automatic Network Discovery.
Time Series: Error, Trend, Damped, Seasonality.
Evaluation: How to Properly Evaluate a Predictive Model, Cross-Validation, ROC Spaces and Curves.
OptiML: How to optimize the process for model selection and parametrization to automatically find the best model for a given dataset.
Fusion: Combination of models, ensembles, linear regressions, logistic regressions, and deepnets to balance out the individual weaknesses of single models.
3. Unsupervised Learning
Clustering: Number of Clusters, Dealing with Missing Values, Modeling Clusters, Scaling Fields, Weights, Summary Fields, K-means vs. G-means.
Association Discovery: Measures (Support, Confidence, Leverage, Significance Level, Lift), Search Strategies (Confidence, Coverage, Leverage, Lift, Support), Missing Items, Discretization.
Topic Modeling: Topics, Terms, Text analysis.
Anomaly Detection: Forest Size, Constraints, ID Fields.
Combination and Automation
Ideal for software developers, system integrators, and technology and strategic consulting firms looking to rapidly get up to speed with Machine Learning.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511002.91/warc/CC-MAIN-20231002164819-20231002194819-00569.warc.gz
|
CC-MAIN-2023-40
| 2,274 | 25 |
https://webapps.stackexchange.com/questions/7208/share-a-link-to-bing-maps-so-that-the-map-directly-opens-in-birds-eye-view
|
code
|
When I share a link to Bing maps, the map always opens in "standard mode". I want it to default or go directly to the Bird's Eye View mode.
What I want to do is this:
- Open Bing maps
- Switch to Bird's Eye View, zoom in, and rotate the view
- Share a link to that, with the exact settings that I made.
So when someone opens my link, it should directly open in Bird's Eye View, zoom in exactly like I did, and so on.
Is it possible to share a link like this?
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817187.10/warc/CC-MAIN-20240418030928-20240418060928-00870.warc.gz
|
CC-MAIN-2024-18
| 458 | 7 |
https://www.joshchrisafis.com/mr-handy.html
|
code
|
UX DESIGN // MOBILE APP PROJECT
How could I provide the end user a solution to their boredom and if I could, what would that look like?
The prototype of the mobile app was designed in Adobe XD, but all illustrations were created using Adobe Illustrator.
Mr. Handy is the mascot for the app as well as the personification of the app's concept. Mr. Handy is a helper, a friend, and a gloved hand with a mustache! Users will encounter more than one variation of Mr. Handy throughout their experience!
Before I knew what Mr. Handy would be, I conducted extensive research into which activities people enjoy in their free time around the Baltimore area (for the purposes of the prototype), and learned a great deal about the cost, travel time, and ease of access that people favor.
With the research backing up the app's design, I decided to include a mascot, named Mr. Handy, who acts as a helper to the user and guides them toward the solution to their boredom. The app is pretty straightforward: the user is asked a series of questions that vary depending on the user's answer to the previous question. Upon completing the questionnaire, the app offers a solution, and the user can either learn more about the solution (venues in the area) or start the questionnaire over for a different outcome.
There are other features tied into the experience, including messages, past picks (previous selections made by the user), fast deals (daily deals offered for venues the user has visited before), settings (updating account information that informs the solutions the app offers), and log out.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363309.86/warc/CC-MAIN-20211206163944-20211206193944-00064.warc.gz
|
CC-MAIN-2021-49
| 1,649 | 7 |
https://cricket.yahoo.com/blogs/yahoo-cricket-blogs/sachin-tendulkar-cv-look-12899.html
|
code
|
Ever wondered what the CVs of famous people would look like? No doubt, these would be extremely impressive and hard to emulate.
Some months back, this writer had compiled Sachin Tendulkar's CV, just for the kicks.
Here's an updated version of it. [You can download the PDF here]
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948544677.45/warc/CC-MAIN-20171214144324-20171214164324-00364.warc.gz
|
CC-MAIN-2017-51
| 338 | 4 |
http://www.terraforums.com/forums/cactus-and-other-succulents/115218-additions.html
|
code
|
Here's a few more of my cacti additions, and one is a succulent.
First up, Notocactus magnificus, "Big Bertha" – well, that's what hubby calls it.. LOL!! There's some dirt and perlite in the crevices I need to clean off tomorrow; repotting her was a pain in the rear – got lots of pookies from her due to her being so LARGE..
And this is Cereus peruvianus monstrose minor; to me it looks like a bunch of lit mini 4th of July sparklers all bunched together..
And this is my only succulent at this time, sooooooon to add more.
I believe this is called the Propeller Plant; it's real velvety.
It's getting repotted tomorrow into the right growing media..
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720973.64/warc/CC-MAIN-20161020183840-00408-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 633 | 6 |
https://goshare.co/software-engineer-job/
|
code
|
Join Our Team as a Software Engineer
Are you proactive, willing to take risks and responsibilities, motivated, and have a passion for creating and supporting great products? Do you thrive on collaboration, working side by side with people of all backgrounds and disciplines? Do you have strong verbal and written communication skills? GoShare is looking for a software engineer that is great at solving problems, debugging, troubleshooting, designing and implementing solutions to complex technical issues. Join our growing team and be one of our first 50 hires!
Our team is smart, hardworking and shares a sense of compassion for helping people. Our headquarters in San Diego supports our global team of employees, contractors and delivery professionals. We believe in collaboration, respect, and fairness when it comes to working with people. We have a work hard, play hard mentality. We believe work/life balance is important for success. Our office vibe is fast-paced and casual. T-shirts and jeans are allowed, especially GoShare t-shirts.
- You’ll work in an Agile, collaborative environment to understand requirements, design, code, and test innovative applications, and support those applications for our highly valued customers.
- You’ll employ Design Thinking to create products that provide a great user experience along with high performance, security, quality, and stability.
- Design and code servers, services, applications and databases that are reusable, scalable and meet critical architecture goals.
- Create Application Programming Interfaces (APIs) that are clean, well-documented, and easy to use.
- Create and configure Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) applications. Design and implement large scale systems and Service Oriented Architectures (SOA) that enable continuous delivery.
- Previous work with servers, applications, and databases
- Familiar with APIs
- Skilled in working with Linux system, parallel computing, system infrastructure, and software architecture
- Understand user and system requirements
- Have an interest in, understanding of, or experience with Agile development methodology
- Use Design Thinking to create products that provide a great user experience along with high performance, security, quality, and stability
- Bachelor’s degree or equivalent work experience in computer science, machine learning, AI, or data science
- You are proactive by nature and willing to take risks. You are motivated and have a passion for creating and supporting great products
- You are great at solving problems, debugging, troubleshooting, designing and implementing solutions to complex technical issues.
- Knowledge of building complex UI layouts, applying solid software patterns, and following platform UI/UX design languages and guidelines preferred, but not mandatory
Skills/Languages: Must know at least one of the languages below
- Shell script
Hours: Full-time, minimum of 40 hours per week
Compensation: Competitive salary, health benefits, matching 401k, stock options
Locations: San Diego, CA, or Fortaleza, Brazil. No relocation assistance. If San Diego based must be authorized to work in the US.
Company: GoShare Inc. 101 W. Broadway San Diego, CA 92101.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064898.14/warc/CC-MAIN-20210411174053-20210411204053-00262.warc.gz
|
CC-MAIN-2021-17
| 3,270 | 24 |
http://www.shoes.com/womens-david-tate-beverly-black-satin-p2_id220717
|
code
|
Product & Brand Information
Customer Ratings & Reviews
88% would recommend to a friend.
I wanted a strappy, stylish shoe for a formal wedding with a "reasonable" heel height. This is perfect. The wedding is this weekend, so I am not sure how comfortable it will be for the whole evening, but it fits well and feels comfortable right out of the box! Would prefer leather over satin finish, but.....
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542414.34/warc/CC-MAIN-20161202170902-00443-ip-10-31-129-80.ec2.internal.warc.gz
|
CC-MAIN-2016-50
| 438 | 6 |
https://leste.maemo.org/index.php?title=Extras/ScummVM&diff=prev&oldid=1232
|
code
|
Difference between revisions of "Extras/ScummVM"
Latest revision as of 18:29, 1 November 2021
ScummVM is a program which allows you to run certain classic graphical adventure and role-playing games, provided you already have their data files. The clever part about this: ScummVM just replaces the executables shipped with the games, allowing you to play them on systems for which they were never designed! ScummVM is a complete rewrite of these games' executables and is not an emulator. (From ScummVM website)
Relevant issue: https://github.com/maemo-leste/bugtracker/issues/269
Also see this page for more info: https://wiki.scummvm.org/index.php/Maemo
- Games seems to be playable, with sound and input
- GUI with the default theme is too small to be clickable/visible. With the theme scummvm-remastered (scummremastered.zip) it is possible to set GUI scale to 'Large' and get a readable GUI:
To get large readable GUI:
change gui_base=0 to gui_base=240
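For example, in ScummVM's configuration file (the location is an assumption for Maemo Leste; on Linux builds it is commonly ~/.config/scummvm/scummvm.ini or ~/.scummvmrc), under the [scummvm] section:

    [scummvm]
    gui_base=240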
Neverhood, Ultima 4, Ultima 6, Grim Fandango, Broken Sword 1
Change keymap as appropriate, works well.
- Implement sane default keys for the devices (there is rudimentary detection for the N900, but we support more devices)
- Use libsdl2 build, didn't compile when User:Wizzup tried.
- Revise Maemo-Leste configure backend/target to remove 'optification'
- Create Widescreen readable theme
- Fix touchscreen to pointer position de-synchronization. (Starts okay, goes bad).
- Find rendering and build options for low battery use (adventure game sessions can be long)
- Get Maemo-Leste target into upstream SCUMMVM
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00331.warc.gz
|
CC-MAIN-2022-33
| 1,582 | 19 |
https://www.liquid-robotics.com/markets/environmental-assessment/wave-gliders-for-climate-change-data-co2-monitoring/
|
code
|
The dominant methods for gathering meteorological and oceanographic (METOC) data have been largely the same over the last 50 years – buoys, ships, and satellites. Yet the challenges we face today – from modeling climate change to predicting hurricanes and typhoons – require more data and greater flexibility. Cost and human risk have been large barriers to extended open ocean operations, until now.
The Wave Glider platform is an autonomous, unmanned surface vehicle that can host a range of surface and sub-surface sensors over durations up to a year in the open ocean while also providing real-time, two-way communications. Wave and solar-powered Wave Gliders can:
Leading scientists, governments, and corporations use Wave Gliders to gather a wide range of METOC data. Examples include:
While Liquid Robotics supports more than 20 different sensor integrations, our customers and partners have performed over 30 additional types of sensor integrations.
Using the Wave Glider platform, our customers and partners are pushing the boundaries of science further and unlocking the potential of the blue ocean economy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532929.54/warc/CC-MAIN-20190421215917-20190422001917-00166.warc.gz
|
CC-MAIN-2019-18
| 1,118 | 5 |
https://clarity.fm/joelazar
|
code
|
I help companies figure out what to say, and how to say it. Could be a video, or a presentation, or an ad. But mostly it's helping with the story around high-level keynote presentations and low-level sales decks. Former Salesforce guy – I helped Marc Benioff with his keynote speeches. Former Apple guy too, but no, I never built a keynote for Steve Jobs.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500076.87/warc/CC-MAIN-20230203221113-20230204011113-00276.warc.gz
|
CC-MAIN-2023-06
| 353 | 1 |
https://www.rebeccabhaddenshm.com/coursera-machine-learning-regularization-parameter/
|
code
|
Let's talk first about Coursera Machine Learning and its regularization parameter… The two ex-CEOs of Coursera, Andrew Ng and Daphne Koller, are no longer actively managing the company themselves. In 2018, Daphne Koller founded Insitro, an innovative company that intertwines drug discovery and machine learning.
Coursera is still a relatively new company, and I am genuinely interested to see what its future will look like. How much does Coursera cost?
Individual courses cost $29 to $99, but in a lot of cases they can be audited for free. Coursera's online degrees, however, can cost anywhere from $15,000 to $42,000.
Coursera Plus is Coursera's annual subscription service, through which learners can access all 3,000+ courses, Specializations, and Professional Certificates with unlimited access. The plan offers outstanding value for students who take online courses often.
Is Coursera worth it?
Yes, Coursera is legit and worth the expense. It is among the most affordable MOOC websites currently out there. Thousands of university-backed online courses make it very appealing for MOOCs, and the new subscription-based Coursera Plus offers excellent value for frequent online students.
How does Coursera make money?
Coursera's annual revenue is estimated to be around $140 million, and the majority of it comes from paid online courses, Specializations, MasterTracks, online degrees, and enterprise customers. The global business e-learning market is growing astonishingly quickly, and it's also becoming an increasingly large part of Coursera's revenue.
When you explore the course catalog, you'll immediately discover there's a lot on offer. The catalog includes courses in humanities and arts, sciences, business, IT, languages, personal development, and more.
Andrew Ng's course is often described as perhaps the best machine learning course ever, and I sort of agree with that, because it's quite a good course. Back in 2015 it was a little too much for me: after a number of lessons I realized I needed to go back to basics. But even just starting the course was inspiring, because I realized there were a lot of things I needed to learn when it comes to machine learning, and it was incredible motivation to get started and eventually get to where I am now. Coursera played a huge role in my career and my motivation, and I cannot thank them enough for that.
With this in mind, let's go through some benefits you may get, and also some unreasonable expectations many of you may have. We all know that the e-learning space and market are growing rapidly, and alongside Coursera we have many other platforms, such as A Cloud Guru, Udemy, or Pluralsight. There are many options out there: for cloud services A Cloud Guru is great, and for anything tech-related Pluralsight is great. I use them all – I used Pluralsight many times, for many months, because at various points I wanted to up my skills, and I also used Udemy back in 2013–2014. Nowadays I don't really use Udemy that much, because there's too much noise on that platform: everyone is making courses, so you get a lot of people without much experience in various fields publishing courses there.
Because there's no vetting process, there is a lot of noise. Of course you have a lot of good courses there, but they get lost in the widespread flood of average ones. Nevertheless, Udemy still has some great courses, and I have a video about the best machine learning course on Udemy – go check that one out. But again, because we have so many platforms that create courses and offer certifications, the significance of any one particular certification is diluted. So you need an edge when it comes to these certifications, and Coursera sort of has that edge, because it offers courses from top universities, they're quite inexpensive, and the courses are recorded by experts in the field. The courses and certifications you receive from Coursera still have some reputational advantage compared to other platforms.
So, in my opinion, Coursera is the best platform if you want to get a certification, because you still have that reputation that kind of flows down from the university onto you as an individual. Having these certifications also helps, because you can add them to your LinkedIn profile, for example – maybe not to your CV – and promote yourself, signaling that you know those topics. It also shows that you are a lifelong learner, which is very important for employers: they want to see a person who constantly wants to up their skills, who is always interested in improving, who is in self-improvement mode and never just gets comfortable in their current position.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100972.58/warc/CC-MAIN-20231209202131-20231209232131-00097.warc.gz
|
CC-MAIN-2023-50
| 5,461 | 12 |
https://conservativeamericanews.com/sanctuary-in-the-suburbs-naperville-councilman-urges-elites-to-shelter-illegal-immigrants/
|
code
|
In this special report, Gary Franchi delves into the controversial proposal by Naperville City Councilman Josh McBroom, suggesting affluent citizens host illegal immigrants in their homes. We explore the broader implications of this radical idea amidst the escalating border crisis. How are other cities like Chicago responding? Are major airports becoming migrant shelters? What does this mean for Biden’s policies and the myth of sanctuary cities? Tune in for an eye-opening look at local solutions to a national issue, contrasting Biden’s approach with Trump’s leadership. Don’t miss our Final Thought on why this story matters to every American.
Copyright Disclaimer: Citation of articles and authors in this report does not imply ownership. Works and images presented here fall under Fair Use Section 107 and are used for commentary on globally significant newsworthy events. Under Section 107 of the Copyright Act 1976, allowance is made for fair use for purposes such as criticism, comment, news reporting, teaching, scholarship, and research.
Community Guidelines Disclaimer: The points of view and purpose of this video are not to bully or harass anybody, but rather to share opinions and thoughts with other like-minded individuals curious about the subject.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817081.52/warc/CC-MAIN-20240416093441-20240416123441-00866.warc.gz
|
CC-MAIN-2024-18
| 1,566 | 9 |
https://stats.stackexchange.com/questions/367521/how-does-a-fitted-linear-mixed-effects-model-predict-longitudinal-output-for-a-n
|
code
|
I fitted a linear mixed-effects model using the nlme package on the aids dataset.
Here, CD4 is the CD4 cell count, obstime is the time of observation, and patient is the patient id.
My linear mixed effects model looks like this:
lmeFIT <- lme(CD4 ~ obstime, random = ~ 1|patient, data=aids_train)
I have split my dataset into training and testing sets, where my testing set consists of data from 2 subjects and my training set consists of data from the remaining subjects. The model shown above fits random intercepts for the different patients in the training dataset. Now, my questions are:
- Once the model has been fitted, how exactly does the mixed-effects model predict outputs for new patient ids in my testing dataset?
- All the examples I have seen online show that using mixed-effects models we can plot fitted lines with different intercepts for each subject in the training dataset. However, how do we know what the intercept would be for new subjects in the testing dataset? What is the mathematical explanation for that?
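A note on what typically happens here: a brand-new patient has no estimated random intercept, and since the random effects are assumed to have mean zero, the best prediction for an unseen subject is just the population-level (fixed-effects) line; in nlme this is what predict(lmeFIT, newdata, level = 0) returns. As a minimal, runnable sketch of the same idea in Python's statsmodels (the toy data below is invented and only stands in for the aids data described above):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Toy stand-in for the aids data: 10 patients, 5 visits each.
aids_train = pd.DataFrame({
    "patient": np.repeat(np.arange(10), 5),
    "obstime": np.tile(np.arange(5), 10),
})
aids_train["CD4"] = 10 - 0.5 * aids_train["obstime"] + rng.normal(0, 1, 50)

# Random-intercept model, analogous to lme(CD4 ~ obstime, random = ~1|patient).
fit = smf.mixedlm("CD4 ~ obstime", aids_train, groups=aids_train["patient"]).fit()

# A patient absent from training has no estimated random intercept; its
# conditional expectation is 0, so the prediction falls back to the
# fixed-effects line E[CD4] = beta0 + beta1 * obstime.
new_patient = pd.DataFrame({"obstime": [0, 1, 2]})
print(fit.predict(new_patient))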
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740733.1/warc/CC-MAIN-20200815065105-20200815095105-00145.warc.gz
|
CC-MAIN-2020-34
| 1,018 | 7 |
https://logit.io/sources/configure/ruby_on_rails/
|
code
|
Follow this step-by-step guide to get 'logs' from your system to Logit.io:
Step 1 - (Optional) Creating an Application
A sample application will be created that will send logs to Logstash.
Create a folder that will host your application on your machine.
Using your CMD prompt or Terminal Editor, enter the following command:
rails new sample_app
The new application, called sample_app, has been created. Navigate to the new application folder.
The final part of the setup is to enter the following command into CMD prompt or Terminal:
rails server
This starts the server. Opening a browser and navigating to http://localhost:3000 will display the Rails splash page, showing that the setup was successful.
Step 2 - (Optional) Adding Sample Code
The next step is to create a controller where the code will be entered for sending a log to Logstash. The following command will create a controller called pages:
rails generate controller pages
The new controller has been created in the app/controllers folder with the name pages_controller.rb. Open this file using a text editor; the controller is blank, with just the following text contained inside the file.
class PagesController < ApplicationController
end
An action called home should be added to the controller. Some text will be added to ensure that the application is running correctly. The code inside the controller will look as follows:
class PagesController < ApplicationController
  def home
    message = "Logging test!"
    @greeting = message
  end
end
Next a view needs to be created for the action. Copy the file mailer.text.erb from the app/views/layouts folder into the app/views/pages folder. Rename the copied file to home.html.erb. Open the copied file with a text editor, remove all existing content and replace it with the following code:
<h1>Ruby logger test app!</h1>
<p><%= @greeting %></p>
The home.html.erb file is a simple view that will be displayed after the code in the home method of the pages controller has run. A root route must be added to ensure that this happens; it goes in routes.rb, located in the config folder (config/routes.rb). Opening the file will reveal that there are currently comments within it; any comments should be removed (they start with #) and replaced by the following:
root to: 'pages#home'
A refresh of the browser (or a restart of the server if it was stopped previously, using the command rails server) will display the new page.
Step 3 - Setup Logstash Logger Plugin
Using CMD Prompt or Terminal editor, the user must be in the sample_app directory and enter the following command:
gem install logstash-logger -v 0.26.1
There is a file in the top-level directory of the application called Gemfile. When opening this file with a text editor, many references to different gems can be seen, e.g.
gem 'rails', '~> 5.2.1'
A reference is required for the newly installed logstash-logger gem and needs to be added; this is done as follows:
# Use logstash-logger
gem 'logstash-logger', '~> 0.26.1'
A reference is also required at the very top of the pages_controller. The code to do this is as follows:
Step 4 - Add TCP-TLS Logging
Step 5 - Add TCP Logging
Step 6 - Check Logit.io for your logs
Data should now have been sent to your Stack.
If you don't see logs, take a look at 'How to diagnose no data in Stack' below for how to diagnose common issues.
Step 8 - Ruby Logging Overview
Ruby is an open-source, object-oriented programming language created in the mid-90s by Yukihiro Matsumoto. It is used by some of the web's most popular sites, including Shopify, Twitch, Twitter, Airbnb and GitHub.
Ruby is well known for being easily comprehensible and has a syntax comparable to that of C and Java; it is also equally suited to front-end and back-end development. Ruby also supports the majority of operating systems, including Linux, Windows & Mac.
Ruby log events and errors can often be seen in two common locations: inline with the program's execution, and in separate log files at an output path such as /var/log/ruby.log.
Ruby log levels include the following five statuses, listed in decreasing priority order: FATAL, ERROR, WARN, INFO and DEBUG.
For live debugging, being able to see your errors in the program’s execution is useful but for longer term log management an external solution is required for efficient processing, parsing and reporting.
Thanks to our ELK as a Service platform, Logit.io makes parsing and managing your logs from Ruby easy and also provides actionable insights that can be used by your entire engineering and development team. Our platform is able to centralise all of your logs across numerous programming languages, tools, and cloud services that you use daily.
If you need any more assistance in analysing your Ruby logs we're here to help. Feel free to get in touch by reaching out to our support team via Intercom and we will be happy to assist.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816070.70/warc/CC-MAIN-20240412194614-20240412224614-00799.warc.gz
|
CC-MAIN-2024-18
| 4,846 | 58 |
https://community.oracle.com/message/10166100
|
code
|
This content has been marked as final.
SQL Developer freezes when a huge set of data is extracted. As you say it is 15 million records; use Toad if you have it.
Spool the output to a txt file with comma-separated columns.
After spooling, split the file into parts and save them in Excel sheets if you are not using the latest MS Excel.
Use OS commands to split the large file. (Much easier on Unix.)
910874 wrote: The result set of one of my queries contains around 15 million rows. Am trying to copy the resultset to an excel ....
Sorry, but this just sounds plain stupid.
EXCEL IS NOT A DATABASE!
Never mind the fact that Excel does not even support 1 million rows in a worksheet. It is not designed and not developed to deal with crunching and analysing millions of rows. A database is.
Use your software tools correctly. Treat Excel as a spreadsheet and the database as a database.
910874 wrote: Is there a way to create a pivot table in a spreadsheet using the result set in SQL Developer as a source?
No. You can connect your Excel directly to the database server via ODBC.
Excel can thus pass a SQL query to Oracle. Oracle can do the major crunching of data, returning a manageable and relevant data set to Excel. Excel can then handle the rendering of that data set and provide additional analysis functionality on that data set.
To connect Excel to SQL-Developer, is client application to client application connectivity. This is not the norm. And requires both applications to support some kind of Inter Process Communication (IPC) interface to talk to one another. In Windows, a typical interface for this is (or was) DDE - Dynamic Data Exchange. This has evolved into more complex OLE, ActiveX and COM interfaces, but is still supported and a fairly easy thing to implement when you code an Windows application.
SQL-Developer is however a platform agnostic Java application that runs on multiple operating systems. And it is unlikely to provide extensive Windows style IPC integration using DDE/OLE/ActiveX/COM.
Here are the Excel limitations:
Worksheet size (Excel 2007 and later): 1,048,576 rows by 16,384 columns
Worksheet size (Excel 97-2003): 65,536 rows by 256 columns
15 million records is far beyond that, not to mention a very bad use case of Excel.
Like others said, just use the database to do the crunching and Excel to just display the relevant data.
You might want to look into "Excel MS Query" (under Data -> From Other Sources -> Microsoft Query in Excel 2010). This will allow you to build a pivot table in Excel linked directly to your source database (via your ODBC connection). I'd expect performance with 15m records would be a problem, but MS Query lets you summarize and filter the data. Maybe you'd want to get your result set down to < 100k rows, and you could always create a couple of different pivot views.
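To make the "let the database crunch" advice concrete, here is a rough Python illustration (the connection details, table, and columns are all invented placeholders) of pulling only a pre-aggregated result set small enough for a spreadsheet:

import oracledb  # python-oracledb; credentials and DSN below are placeholders

conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Let Oracle summarize the 15 million rows and return only one row per
# region/month, instead of shipping the raw records to the client.
cur.execute("""
    SELECT region, TRUNC(order_date, 'MM') AS month, SUM(amount) AS total
    FROM orders
    GROUP BY region, TRUNC(order_date, 'MM')
""")
summary = cur.fetchall()  # small enough to paste or export to Excel
conn.close()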
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823802.12/warc/CC-MAIN-20160723071023-00073-ip-10-185-27-174.ec2.internal.warc.gz
|
CC-MAIN-2016-30
| 2,786 | 21 |
https://sdivakarrajesh.medium.com/?source=post_internal_links---------5----------------------------
|
code
|
The face recognition module provides us with functions to load a photo, get the encodings of the faces in the photo, and compare them, telling us whether two encodings match or not.
We’ll use two photos of Kylie(one with black hair and another with blonde hair) and one photo of Khloe
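As a rough sketch of what that looks like with the Python face_recognition library (the image file names here are placeholders for the three photos just mentioned):

import face_recognition

# Load the photos and compute a 128-dimension encoding for each face.
kylie_black = face_recognition.load_image_file("kylie_black_hair.jpg")
kylie_blonde = face_recognition.load_image_file("kylie_blonde_hair.jpg")
khloe = face_recognition.load_image_file("khloe.jpg")

known = face_recognition.face_encodings(kylie_black)[0]
blonde = face_recognition.face_encodings(kylie_blonde)[0]
other = face_recognition.face_encodings(khloe)[0]

# compare_faces returns one boolean per known encoding.
print(face_recognition.compare_faces([known], blonde))  # expected: [True]
print(face_recognition.compare_faces([known], other))   # expected: [False]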
Running the program gives this result. Even though…
Whenever I open up an npm project workspace in VSCode I usually run a few commands on startup. VSCode has been my favorite text editor for quite some time now, primarily due to the available customization options and the wide variety of extensions. And it didn't fail me here😛.
ctrl + `
ctrl + shift + `
npm run test
What do we, developers, do when we realize that we are doing things again and…
Especially now during the COVID-19 period😷, when you are forced to work from home🏡, you somehow have to access the servers and databases in your office’s intranet.
But there must be that one (or more) machine(s) that acts as the load balancer (the machine exposed to the internet — let's call it the "proxymachine"), which distributes external traffic to the servers inside the intranet. We're…
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072366.31/warc/CC-MAIN-20210413122252-20210413152252-00474.warc.gz
|
CC-MAIN-2021-17
| 1,167 | 10 |
https://powerusers.microsoft.com/t5/General-Power-Automate/Gateway-attachment-size-limit/m-p/907995
|
code
|
I have a flow that scrapes an email for text and attachments, then passes them through an on-premises gateway to an internal API using a custom connector.
This process completes fine as long as the attachment is under 3 MB. If the attachment is over, I receive the following error:
No, I couldn't find any information about increasing the limit. I ended up checking the attachment size and then landing the large files on an internal server (these can be passed through the gateway to a file system, but not to the API call...), along with a PowerShell script that was dynamically created with the API details. In the flow I ran a desktop flow (unattended; this needs another licence) that executed the PowerShell script to call the API and upload the attachment, then deleted both files.
It's a long way around, but it works.
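The PowerShell upload step itself isn't shown in the post; purely for illustration, a rough Python equivalent of that step (the endpoint and file path are invented) would look something like this:

import requests

# Hypothetical internal endpoint; the real API details were generated into
# the script dynamically by the flow.
url = "https://internal-api.example.com/attachments"

with open(r"C:\staging\large_attachment.pdf", "rb") as f:
    resp = requests.post(url, files={"file": f}, timeout=300)

resp.raise_for_status()  # fail loudly if the upload did not succeed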
Hey Mate @bevanmedley
There is a limit on the gateway attachment size.
If this reply has answered your question or solved your issue, please mark this question as answered. Answered questions helps users in the future who may have the same issue or question quickly find a resolution via search. If you liked my response, please consider giving it a thumbs up. THANKS!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358688.35/warc/CC-MAIN-20211129044311-20211129074311-00252.warc.gz
|
CC-MAIN-2021-49
| 1,406 | 10 |
https://ocaml.org/p/alg_structs/0.1.3/doc/index.html
|
code
|
See the API Reference.
A library specifying algebraic structures and category-theoretic idioms useful in the design and implementation of software.
It aims to provide useful modules that are (correctly) based on algebraic and category-theoretic structures rather than mathematically precise representations.
Currently, this library should be viewed as an experiment to determine whether easy access to such mechanisms can be used to any advantage in OCaml programs.
The library is modeled after a fragment of Haskell’s rich ecosystem of algebraic structures implemented via typeclasses. However, liberties have been taken to adapt the implementations to be more amenable to idiomatic OCaml where it seemed appropriate.
Each structure includes a signature S which gives its specification. S specifies the core types and operations of the structure, as well as any additional functions derived from those core aspects. S includes extensions which are derived from the properties of the structure, and is not a mathematically precise representation of the underlying structure.
Most of the structures can be built up from a Seed. Where applicable, a structure's Seed specifies the essential types and operators needed to elaborate out the extended structure. Users are free to implement their own fully customized versions of a structure, or to build one from a Seed and then override whichever functions they want. See each structure for relevant examples.
Every structure includes a parameterized module called Law. The laws are expressed as predicates that should be true for any arguments of the specified type. The Law serves both as documentation of those necessary properties of a structure that cannot be encoded in the type system and as a tool for checking that your own implementations are lawful. If you implement a structure satisfying some spec, you should ensure it follows the laws. You can use the package alg_structs_qcheck to help generate property-based tests for this purpose.
Assuming you have the library installed and its modules in scope:

Applicative.List.((^) <@> ["a";"b"] <*> ["1";"2"])
(* - : string list = ["a1"; "a2"; "b1"; "b2"] *)
let some_sum =
  let open Option.Let_bind in
  let+ x = Some 1
  and+ y = Some 2
  and+ z = Some 3 in
  x + y + z

let () = assert (some_sum = Some 6)
let tupples_of_list_elements =
  let open List.Let_bind in
  let+ x = [1; 2]
  and+ y = ['a'; 'b'] in
  (x, y)

let () =
  assert (tupples_of_list_elements = [(1, 'a'); (1, 'b'); (2, 'a'); (2, 'b')])
module Tree = struct
  module T = struct
    type 'a t =
      | Nil
      | Leaf of 'a
      | Node of 'a t * 'a * 'a t

    let rec fold_right ~f t ~init =
      match t with
      | Nil -> init
      | Leaf x -> f x init
      | Node (l, x, r) ->
        fold_right ~f ~init:(f x (fold_right ~f ~init r)) l
  end

  include T
  include (Make (T) : S with type 'a t = 'a T.t)
end
let tree = Tree.T.Node (Leaf 1, 2, Node (Leaf 4, 3, Leaf 5))

Tree.max tree ~compare:Int.compare;;
(* - : int option = Some 5 *)

Tree.min tree ~compare:Int.compare;;
(* - : int option = Some 1 *)

Tree.to_list tree;;
(* - : int list = [1; 2; 4; 3; 5] *)

Tree.length tree;;
(* - : int = 5 *)
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00101.warc.gz
|
CC-MAIN-2022-33
| 3,040 | 25 |
http://www.webmasterworld.com/printerfriendlyv5.cgi?forum=5&discussion=1144&serial=245461&user=
|
code
|
Well, hopefully I have the best of both worlds. I do know the technical/geek parts and am still learning the SEO parts. I also have 3 great partners: one total techie and the others salesmen. I think we have a very viable solution; however, we are an unknown company with four employees, us. But, IMHO, if more companies started to worry about SEO at the beginning of projects instead of the end, a lot of folks here would be working for giant corporations and be extremely unhappy in their careers. Instead, from what I have read, a lot of you love working out of your house and love the extra time with your kids/spouse/significant other. So would the web be a happier place if ALL companies started projects with SEO? I kinda doubt it. Of course all this is just conjecture on my part; I could be wrong.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010934950/warc/CC-MAIN-20140305091534-00019-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 797 | 1 |
https://lutpub.lut.fi/handle/10024/34858
|
code
|
Grudinschi, Daniela (2005)
No files are associated with this item.
Workflow technology is expanding rapidly, and new technologies are being employed in the process. The internet, one such technology, could allow every user within an organization to make use of workflow. This thesis discusses internet-based workflows from both technical and economic points of view. First, as a broader introduction, the basic concepts related to this topic are presented: the workflow concept, processes and workflows, and the workflow management system. This introduction also covers the XML language and an overview of the Web Services stack. Then it is explained how internet-based workflows work: the architecture of an internet-based enterprise is presented, along with the flows between web services. Finally, some workflow languages are presented briefly. In addition, based on this knowledge, a sample workflow was implemented.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053717.37/warc/CC-MAIN-20210916174455-20210916204455-00500.warc.gz
|
CC-MAIN-2021-39
| 968 | 3 |
https://forum.duolingo.com/comment/460577/Le-dottoresse
|
code
|
This discussion is locked.
I think in any language, if you use the masculine form for a woman, it will sound weird but OK, because it is like... the standard. However, if you use the feminine form for a man, it will sound offensive.
For example "Thank you" in Portuguese is "Obrigado" (the standard form) If you're a woman, you'd say "Obrigada". If a woman says "Obrigado", that's ok. But if a man says "obrigada", that would sound weird.
In Italian, there is a separate word for a male doctor and a female doctor. In English, there is not. When you give a translation it has to be in correct English. So, dottoresse is just "doctor". "Lady doctor" is wrong because it is not a proper English term. See here:
The double consonants are held longer which may result in it sounding like two words. Doubles are not divided into separate syllables but for a newbie it might be taught that way to help you with the longer held sound.
A similar problem is the "z" and "tz" sounds in The Leaning Tower of Pisa or Pizza.
Check out number 5 in this link. :)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500719.31/warc/CC-MAIN-20230208060523-20230208090523-00574.warc.gz
|
CC-MAIN-2023-06
| 1,046 | 7 |
https://cybervengers.club/en/
|
code
|
Welcome to the world of the CyberVengers.
Aïa, Ben, Clara, Liam and Sango will teach you all about online risks and help you avoid them. Check out the first episodes in this series and share them with your friends, and why not your parents? They could also learn a thing or two! ;)
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476396.49/warc/CC-MAIN-20240303142747-20240303172747-00282.warc.gz
|
CC-MAIN-2024-10
| 282 | 2 |
https://leicester.figshare.com/articles/journal_contribution/Stellar_multiplicity_affects_the_correlation_between_protoplanetary_disc_masses_and_accretion_rates_binaries_explain_high_accretors_in_Upper_Sco/19572715
|
code
|
Stellar multiplicity affects the correlation between protoplanetary disc masses and accretion rates: binaries explain high accretors in Upper Sco
In recent years, a correlation between mass accretion rates onto new-born stars and their protoplanetary disc masses was detected in nearby young star-forming regions. Although such a correlation can be interpreted as due to viscous-diffusion processes in the disc, highly accreting sources with low disc masses in more evolved regions remain puzzling. In this paper, we hypothesize that the presence of a stellar companion truncating the disc can explain these outliers. First, we searched the literature for information on stellar multiplicity in Lupus, Chamaeleon I, and Upper Sco, finding that roughly 20 per cent of the discs involved in the correlation are in binaries or higher-order multiple stellar systems. We prove with high statistical significance that at any disc mass these sources have systematically higher accretion rates than those around single stars, with the bulk of the binary population being clustered around $M_{\rm disc}/\dot{M}_{\rm acc} \approx 0.1\,{\rm Myr}$. We then run coupled gas and dust one-dimensional evolutionary models of tidally truncated discs to be compared with the data. We find that these models are able to reproduce well most of the population of observed discs in Lupus and Upper Sco, even though the unknown eccentricity of each binary prevents an object-by-object comparison. In the latter region, the agreement improves if the grain coagulation efficiency is reduced, as may be expected in discs around close binaries. Finally, we mention that thermal winds and sub-structures can be important in explaining a few outlying sources.
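For context, the clustering value quoted above can be read as a disc-depletion timescale, the time the current accretion rate would take to drain the disc. An illustrative back-of-the-envelope example (round numbers, not taken from the paper):

\[
t_{\rm depl} \equiv \frac{M_{\rm disc}}{\dot{M}_{\rm acc}}
\approx \frac{10^{-3}\,M_\odot}{10^{-8}\,M_\odot\,{\rm yr}^{-1}}
= 10^{5}\,{\rm yr} = 0.1\,{\rm Myr}
\]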
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604794.68/warc/CC-MAIN-20220526100301-20220526130301-00786.warc.gz
|
CC-MAIN-2022-21
| 1,691 | 2 |
https://www.learningreviews.com/swift-playgrounds-app
|
code
|
If you want to code your own iPad and iPhone apps, this programming app, designed for middle and high school students, is a great starting point. You don't need any programming know-how to start coding. The Swift Playgrounds app teaches you how.
Kids first use code to solve a series of puzzles. After they understand the basics, they take on more complex coding challenges.
If you want to use Swift Playgrounds in the classroom, the app has two free associated Teachers Guides available from Apple Education:
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474482.98/warc/CC-MAIN-20240224012912-20240224042912-00612.warc.gz
|
CC-MAIN-2024-10
| 510 | 3 |
https://www.ziprecruiter.com/c/CyberCoders/Job/Mid-Level-.NET-Developer/-in-Arlington,VA?jid=ac3fbc713a690913&lvk=8Yl_G1uSlYaui6Iw4K5E7g.--LtIO5mwnZ&tsid=152000406
|
code
|
Mid-Level .NET Developer
CyberCoders Arlington, VA
- Posted: over a month ago
- $110,000 to $130,000 Yearly
Salary Range: $110k - $130k
Requirements: .NET Core, ASP.NET, C#, SQL Server, Visual Studio, Kubernetes/Docker
Based in the DC area, we are one of the premier Fintech Mortgage companies in the US. Due to growth, we are actively seeking to hire a Mid-Senior Level .NET Developer to join our team in Washington, DC. The ideal candidate will have expertise with C#, SQL Server, .NET Core, ASP.NET, Visual Studio, and RESTful APIs. Any experience with Kubernetes, Docker, Typescript, Aurelia, Angular, and Fintech are pluses. If this sounds like you, please apply now or send your resume to [email protected]!
-Gathering and organizing business requirements and translating into functional specs
-Translate application storyboards and use cases into functional modules
-Design, build and maintain efficient, reusable, secure, and reliable code
-Develop novel solutions for data mapping and data translation
-Ensure the best possible performance, quality, and responsiveness of applications
-Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues
-Develop unit tests and test automation scripts
-Help maintain code quality, organization, and automatization
-Deep experience with the ASP.NET framework, SQL Server, and design/architectural patterns including Model-View-Controller (MVC), Web API, and ASP.NET web forms
-Demonstrated knowledge of HTML/CSS/JS and front-end development tools (but not our core focus)
-Proficiency with service-based architecture styles (REST, RPC), and API-first designs
-Strong if not near expert level experience with C#, Visual Studio, and MS toolsets
-Solid experience in Microsoft SQL Server and building analytical applications
-Experience with the Microsoft development toolset, including Visual Studio and SQL Management
-Experience with Agile methodology, and use of development tools such as Jira and GitHub
-Experience with DevOps disciplines and toolsets such as Kubernetes, Docker, TeamCity, and Octopus Deploy
-Demonstrated ability to work independently and as part of a team
-Minimum of BSc/BA in Computer Science, Engineering, or a related field
Applicants must be authorized to work in the U.S.
CyberCoders, Inc is proud to be an Equal Opportunity Employer
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
Your Right to Work – In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369523.73/warc/CC-MAIN-20210304205238-20210304235238-00564.warc.gz
|
CC-MAIN-2021-10
| 2,811 | 30 |
https://www.dipprofit.com/web3-jobs-lead-software-engineer/
|
code
|
Web3 Jobs: Lead Software Engineer
Job Type: Remote
Status: Full Time
Organization: Cerebellum Network
Salary: $85,000 – $130,000 yearly
Are you someone who considers yourself to be an exceptional problem solver, not just limited to coding?
Have you spent years refining your skills in innovation, yet find yourself craving for more intricate and challenging problems to solve?
Do you feel thrilled at the prospect of being a significant contributor to the release of a software platform that can potentially impact millions or even billions of people?
If your answers to all three questions are a definite yes, then we would love to have the opportunity to connect with you!
What We Expect:
At our cutting-edge web3 infrastructure scale-up project, we are seeking exceptionally talented developers who are not only looking for a job but are also eager to be challenged to achieve great things in a thrilling venture.
Our team comprises veterans of numerous successful startups who are committed to putting you on the fast track toward your future success.
This position requires a thorough understanding of large-scale distributed computing and web3 technologies. However, we are open to considering individuals with less experience in web3 but possess a track record of building large, robust data solutions and are keen on growing into web3.
You will immediately be challenged to take charge of critical development track(s) within one of our innovation squads: 1. Decentralized Data Cloud Squad, 2. Blockchain Squad, 3. Tools, Services & Integrations Squad.
Once you are assigned to a squad, you will be responsible for driving the design, prototyping, testing/implementation (including integral tests), simulating, and CI/CD of essential components of our platform in a highly collaborative and iterative manner across squads.
Cere Network is a cutting-edge decentralized data protocol that aims to revolutionize the future of web3. It facilitates trustless content sharing and cloud data interactions among various entities, including applications, users, AI/ML, and (NFT) assets. The platform has received support from some of the biggest names in the industry, including Binance Labs, Republic Labs, and Polygon, to name a few.
At its core, Cere Network aims to create a seamless and secure data-sharing environment for users and businesses, where they can share data and interact with various applications without worrying about security and privacy concerns. The platform leverages the power of blockchain technology to ensure that all data is stored securely and is accessible only to authorized parties.
- We seek teammates who will thrive in our fast-paced work environment, where we default to methodical, simulation-driven, fast development iterations and a first-principle thinking mindset.
- We crave teammates with high standards and strong discipline, embracing a growth mindset to continuously learn and incrementally improve habits and processes.
- We require contributors to have excellent communication skills (esp. written), for everything must be well organized and tracked in Notion, Slack, Wiki’s, etc. We want autonomous, goal-oriented individuals who embrace transparency and accountability. No one wants to micro-manage others.
- We need good teammates who are generally cool people who want to be part of a great team & decentralized community where everyone truly helps and challenges each other to learn/grow by innovating together towards greater shared goals. Embracing the building of such a collaborative community is the only way we can sustain rapid innovation (and the only way to live/work, really).
- 5+ years of experience (preferred 10+) working as a software engineer on storage and distributed systems.
- Extensive programming experience with at least one modern language such as Go, Rust, Typescript, Java or Kotlin.
- In-depth understanding of decentralized systems, blockchain, and web3, as well as system design, data structures, and algorithms.
Nice to have:
- Expertise in database engine internals (storage), including indexing, access methods, concurrency control, logging, caching, transaction processing, replication, backup restore, and buffer management.
- Proficiency in database engine internals (query processing), such as query compilation, optimization, execution, and parallel execution.
- Experience with decentralized storage systems like IPFS.
- Experience developing SDKs and contributing to open-source projects.
- Knowledge of distributed systems, including consensus-based quorum replication and NoSQL system implementation.
Join our exceptional multicultural team that operates globally, with offices in San Francisco, New York, Warsaw, Amsterdam, Berlin, and various locations in Asia. At our organization, we embrace our ethos to enable remote working for our team members, but we also recognize the importance of in-person interaction. Therefore, many of our teams travel to meet up every 1-2 months.
We offer our team members a high degree of autonomy and flexibility, which allows for a balanced and enjoyable life and work experience. However, transparency, accountability, and ownership are essential prerequisites for working with us.
If you are interested in joining a talented and diverse team that values innovation, creativity, and collaboration, we encourage you to apply to our organization today.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474595.59/warc/CC-MAIN-20240225103506-20240225133506-00330.warc.gz
|
CC-MAIN-2024-10
| 5,386 | 33 |
https://community.sonarsource.com/t/sonarcfamily-no-issues-only-duplicate-code/4692
|
code
|
Using SonarQube 7.4 developer edition with SonarCFamily 5.1.1.
I have followed the instructions here https://docs.sonarqube.org/pages/viewpage.action?pageId=8520225 to try to analyse a C++/C# project on a TFS build agent.
I believe I have done this correctly, as the log shows “INFO: Using build-wrapper output: C:\agent\sonar\bw_output\build-wrapper-dump.json” and says it has created metadata for all the files I would have expected. However, the analysis only shows code duplication and no other issues, even though I have purposely put errors in the code.
I have turned the output up to verbose debug and there are no errors or warnings.
If anyone can point me in the right direction that would be great.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832259.90/warc/CC-MAIN-20181219110427-20181219132427-00188.warc.gz
|
CC-MAIN-2018-51
| 708 | 5 |
https://day9.tv/d/b/day9tv-dailies/?tags=jaeyun
|
code
|
I'll likely be continuing my drafting and doing SOME constructed because I need Big Chandra in my life :)
After what seems like an eternity of not playing with Kevin, I'm FINALLY BACK! I fully expect him to catch me up on the meta, how to play, how to succeed, how to not throw, and what the right items ...
We FINALLY return for a single episode of Mostly Walking now that i'm back from Hong Kong!
After a few days of collection building, I'm excited to actually construct some of the neato ideas that have been lurking in my brain whilst in Hong Kong!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524517.31/warc/CC-MAIN-20190716075153-20190716101153-00155.warc.gz
|
CC-MAIN-2019-30
| 554 | 4 |
https://bonitavalley.com/madeformorehome/
|
code
|
Made For More
The question isn’t — are you made for more? You are!
The question is — how do you discover and experience the more God made you for?
Part 1 | Discovering More Practices!
Jeff Brawner | 9.4.2022
Part 2 | The Good Stuff!
Jeff Brawner | 9.11.2022
Part 3 | Super-Charged And Connected!
Jordan Brawner | 9.18.2022
Part 4 | Pursuing God’s Best!
Jeff Brawner | 9.25.2022
Part 5 | Genuine Greatness!
Jeff Brawner | 10.2.2022
Part 6 | An Invitation For More!
Mike Teixeira | 10.9.2022
Part 7 | Life Maximizing Attitudes!
Jeff Brawner | 10.16.2022
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711162.52/warc/CC-MAIN-20221207121241-20221207151241-00479.warc.gz
|
CC-MAIN-2022-49
| 559 | 17 |
https://thexxxmoviestore.com/viewer-discretion/
|
code
|
Must see Puss see!!! Sit back. Tune in. Jack off!!! Vivid presents a Monty Python-style parody of one producer’s beloved proper serial that becomes a showcase for porn right under his stuffy British nose!!! From French new wave to old silent movies to a porno commercial for a home decoration show, every style is skewered and screwed!!! And each features Vivid’s exotic erotic Malezia, with her long sinewy body a virtual playground for the horny. Will our host have a coronary? Or just a nervous breakdown? Either way, it’ll be an episode you don’t want to miss. Viewer Discretion… it is definitely advised.
You can watch Viewer Discretion in its entirety as well as 772 other full length feature movies right now! No catch. No per minute fees. No bull. All the xxx movies are included in your membership. These xxx movies feature your favorite porn stars!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506528.3/warc/CC-MAIN-20230923194908-20230923224908-00620.warc.gz
|
CC-MAIN-2023-40
| 868 | 2 |
https://wiki.lyrasis.org/pages/diffpagesbyversion.action?pageId=79795226&selectedPageVersions=102&selectedPageVersions=101
|
code
|
Stanford’s linked data production project focuses on technical services workflows. For each of four key production pathways we will examine each step in the workflow, from acquisition to discovery, to determine how best to transition to a linked data production environment. Our emphasis is on following each workflow from start to finish to show an end-to-end linked data production process, and to highlight areas for future work. The four pathways are: copy cataloging through the Acquisitions Department, original cataloging, deposit of a single item into the Stanford Digital Repository, and deposit of a collection of resources into the Stanford Digital Repository.
Stanford Project Proposal
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145654.0/warc/CC-MAIN-20200222054424-20200222084424-00057.warc.gz
|
CC-MAIN-2020-10
| 699 | 2 |
https://www.aimedical.com.au/blog/fourier-intelligence-is-the-official-platinum-sponsors-of-rehabweek-2022
|
code
|
RehabWeek is a week-long event that brings together different conferences in the field of rehabilitation technology at the same time and place in order to foster cross-disciplinary communication and the development of relationships between different players.
The RehabWeek includes common keynote lectures and other mutually organized sessions, such as panel discussions and poster sessions. In addition, each conference also organises its own, conference-specific sessions. Visitors can freely choose which conference to attend at any given time.
Feel free to write to us at [email protected] to enjoy special discounts complimentary of IISART.
We hope to see you in Rotterdam on 25 July 2022!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474653.81/warc/CC-MAIN-20240226062606-20240226092606-00799.warc.gz
|
CC-MAIN-2024-10
| 698 | 4 |
https://piksel.mk/career/php-developer/
|
code
|
Get paid for being awesome!
DON'T SEE A POSITION THAT FITS?
We're always looking for talented people
Piksel is a fast growing company with 6 years of experience in the digital marketing world.
We focus on web development and digital marketing while creating successful working teams, sharing knowledge, and promoting team spirit. Thus, we strive to give our team members the best possible grounds for personal and professional growth.
Piksel is looking for an intermediate PHP developer with experience in building high-performing, scalable, enterprise-grade applications, to be part of a talented software team that works on mission-critical applications.
As a PHP developer, your responsibilities will be:
- Designing and developing PHP applications for mission-critical systems and delivering high-availability and performance.
- Contribute in all phases of the development lifecycle.
- Write well designed, testable, efficient code.
- Ensure designs are in compliance with specifications.
- Support continuous improvement by investigating alternatives and technologies.
- Ability to work within an agile team
- Bachelor in Computer Science or equivalent
- Smart, knowledgeable, curious and enthusiastic people
- Must be independent, responsible, self-motivated, with the ability to learn and achieve superior results
- Fluent English is required, additional languages are a plus
- Know how to work independently and be proactive in order to add value to projects.
- Good interpersonal skills and taste in contact with the client, with the desire to get involved in the development of a growing company.
- Practice Clean Code
What we are looking for:
- Minimum of 2+ years PHP development experience; PHP/WordPress custom development is a plus
- Designing databases with MySQL
- Familiar in the design and architecture of medium & large-scaled systems
- Experience with dependency management, source version control, issue tracking tools, and testing frameworks (for example: Git, Jira)
- Basic knowledge of the Laravel Framework is welcome
- Highly competitive monthly rate
- Rolling contract with multiple chances for extension
- Opportunity to work in a market leading organization working at the forefront of technology
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067400.24/warc/CC-MAIN-20210412113508-20210412143508-00496.warc.gz
|
CC-MAIN-2021-17
| 2,227 | 30 |
https://www.integromat.com/en/templates/cloudmersive
|
code
|
You can choose from hundreds of templates. You can use them as they are or customize them to suit your needs.
Automatic email sender with an email validator by Cloudmersive. Validating email addresses is very important and is a step that prevents errors. Additionally, in this scenario an email will be sent to a selected email address informing in detail about the error.
Automatically convert Microsoft documents to PDF and upload them back to a selected folder in OneDrive
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103516990.28/warc/CC-MAIN-20220628111602-20220628141602-00067.warc.gz
|
CC-MAIN-2022-27
| 473 | 3 |
https://tailieunhanh.com/vn/tlID49209_professional-information-technologyprogramming-book-part-132.html
|
code
|
tailieunhanh - Professional Information Technology-Programming Book part 132
Reference document 'Professional Information Technology-Programming Book part 132': information technology and programming material for study, research, and effective work. | The default MDA in OpenBSD is popa3d, but it's pretty limited, so some tips on installing and configuring the IMAP and POP3 portions of Courier and Cyrus are also below.

popa3d. This is OpenBSD's default POP3 MDA. You could read the popa3d manpage and then figure out a good command to put into /etc/rc.local to start it at boot time, but it's more easily initialized through inetd. Edit /etc/inetd.conf and uncomment this line:

pop3 stream tcp nowait root /usr/sbin/popa3d popa3d

popa3d requires no configuration because it only fetches mail from local mailboxes, so you have to have a user account on the system with a non-null password. It's pretty simple and should do perfectly for most servers that only need email for a small number of users. If you need IMAP, or if you need something a little more complex, you should probably use Courier-IMAP instead.

Courier-IMAP. Technically Courier is a complete MTA, but many people just use the MDA portion of it to deliver messages to local or virtual user accounts. In OpenBSD only the MDA portion of Courier (Courier-IMAP) is available, and it's in /usr/ports/mail/courier-imap. Before you install it, make sure you check out the pkg directory and the Makefile to see what flavors are available. Specifically, you can build in support for LDAP, MySQL, PostgreSQL, and POP3. At the end of the Courier-IMAP installation you'll be given a screenful of instructions. Basically they are:

1. Make a configuration directory:
2. mkdir /etc/courier-imap
3. Copy over the default configuration files to it:
4. cp /usr/local/share/examples/courier-imap/* /etc/courier-imap
5. Put these lines in /etc/rc.local to start Courier-IMAP at boot time:
6. mkdir -p /var/run/courier-imap; /usr/local/libexec/authlib/authdaemond start
7. Edit your config files in /etc/courier-imap, then generate OpenSSL certificates with the mkimapdcert script.

Cyrus-IMAPd. There's a memory-mapping incompatibility between Cyrus and OpenBSD, so if you use this MDA you could have some performance .
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154163.9/warc/CC-MAIN-20210801061513-20210801091513-00567.warc.gz
|
CC-MAIN-2021-31
| 2,284 | 3 |
https://www.rockpapershotgun.com/igf-factor-2012-prom-week
|
code
|
Next up in our series of chats with this year's Independent Games Festival finalists is Mike Treanor and Josh McCoy from the UC Santa Cruz team behind ambitious high school-based social simulation/strategy game Prom Week - which is in the running for the Technical Excellence gong at IGF 2012. Here, they talk flirting, 'social physics', bathrooms and their answer to the most important question of all.
RPS: Firstly, a brief introduction for those who may not know you. Who are you? What is your background? Why get into games? Why get into indie games?
Mike: I am Mike Treanor and I am a PhD student in the Expressive Intelligence Studio at UC Santa Cruz and I am a lead on Prom Week. My focus is on design (and tons of coding of course). I got into making videogames because they seemed like the ultimate medium for expression (the whole artgame thing). With priorities like that, you pretty much have no choice but to be indie!
Josh: My name is Josh McCoy and, like Mike, I am a Ph.D. student at EIS in the Center for Games and Playable Media at UC Santa Cruz. I come from both sociology/anthropology and computer science backgrounds and really like the intersection between the two. I've been a gamer for as long as I can remember and enjoy a wide variety of games from sports to LARPing to PC games. By luck, I found my way to working with Michael Mateas in EIS and have had a chance to bring all of my passions together making games and doing research in the intersection of humanities, AI and game design. Indie games seemed like a natural fit for the type of work we do.
RPS: Tell us about your game. What were its origins? What are you trying to do with it? What are you most pleased about it? What would you change if you could?
Mike: Prom Week is a social simulation/strategy game. In it, you control the social actions that a group of high school students take with one another in the week before their prom. Each character's desires and responses are formed by over 5,000 social considerations (e.g. "if you're nice to me, I'll be nice to you", or "I'll be pissed if my friend flirts with someone I have a crush on").
The game is incredibly dynamic. If a level has a goal to get two characters to date, there are countless ways to pull it off. Sorta like how Crayon Physics makes use of its simulation of physics to enable emergent solutions to puzzles, we make use of the "social physics" we built. Also, Prom Week does all this while having actual dialogue (rather than icons in thought bubbles like The Sims).
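To make the "social physics" idea concrete, here is an invented toy sketch in Python of how rule-weighted social considerations might score an action. This is illustrative only, not the actual Comme il Faut implementation:

# Toy illustration of rule-weighted social actions (not the real CiF system).
RULES = [
    # (description, predicate over (actor, target, state), weight)
    ("reciprocity: be nice to those who are nice to you",
     lambda a, t, s: s["nice_to"].get((t, a), 0) > 0, 5),
    ("jealousy: penalize flirting with a friend's crush",
     lambda a, t, s: (a, t) in s["crush_of_friend"], -8),
]

def score_action(actor, target, state):
    """Sum the weights of all social considerations that currently hold."""
    return sum(w for _, pred, w in RULES if pred(actor, target, state))

state = {"nice_to": {("bob", "alice"): 1}, "crush_of_friend": set()}
print(score_action("alice", "bob", state))  # 5: only the reciprocity rule fires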
I can't believe I'm saying this, but if I could change something about the game, I would add achievements.
Josh: Prom Week was created to explore an in-development AI system model of small-group social interaction. It was a direct reaction to the "social games" implicitly encoded in the behaviours of Grace and Trip in Façade. The goal was to take these behaviours and use them explicitly -- any character should be able to pick up and use a social game to further their own social agenda. Comme il Faut, the social AI system in Prom Week, provides a procedural social environment for characters to use these first-class patterns of social behaviour at any time they are appropriate. The author only has to write each behaviour once and it's available to all the characters. After all, why not make social interaction and character dialogue as procedural as combat systems commonly are?
I am very happy that Prom Week turned out to be an experience people enjoy. We aimed high and had a lot of room for failure. It's not often that a research system for interactive dramas gets implemented into a playable experience meant to be widely played and I feel we have created a game that is fun and compelling for a lot of players.
Adding "gossip about", "spread rumour" or "ask them out for me" options for the player is at the top of my wishlist for Prom Week. Right now most of the interactions are dyadic with the AI system bringing in 3rd characters were appropriate. Having the player choose a 3rd character and how to use them would add a new level of fun.
RPS: What are your feelings on the IGF this year? Pleased to be nominated? Impressed by the other finalists? Anything you worry has been overlooked?
Mike: I'm totally pleased! Being an IGF finalist is the highest honour any of us could have expected when we started this. It validates all of our hard work and crazy ambition (Prom Week really is insanely ambitious).
As for the other finalists, I love the ones I know and look forward to seeing the others. Standouts for me are Storyteller, Spelunky and GIRP.
Josh: I have to echo Mike -- being nominated alongside the best of indie developers is quite an honour. I've played and enjoyed indie games for years and I'm glad to be able to contribute to the community.
Each of the finalists' games (and the honourable mentions for that matter) look awesome! I am looking forward to playing them. Very impressed! In particular Storyteller (for obvious reasons), Botanicula, Dear Esther (<3 ghost stories), Frozen Synapse and Antichamber are games I really want to get my hands on.
RPS: Which game would you like to see take the Grand Prize this year?
Mike: No strong opinion. All awesome!
Josh: My sympathies to the jury...
RPS: How do you feel about the indie scene of late? What would you like to see from it in the near future?
Mike: I feel good about it. Passionate people making things they want to make.
I sorta want to say that I wish more people would take bigger risks. You can't deny that AAA games grow up a little every time you see an original indie game get a lot of attention. But I don't know, fun indie games that don't take many risks are awesome too.
Josh: As a developer passionate about AI, more please! There is so much new, fertile ground to break in this space. We've got a good grasp on physics and Euclidean space; it's time push forward other frontiers.
RPS: And how does the future look for you, both in terms of this game and other projects?
Mike: I'm very excited about the future. Prom Week will release soon, and I am really excited to see what people think about it.
I'm wrapping up the PhD next year. On the way, I will be finishing up "Cartoonist", a collaboration with the GA Tech Newsgames group on a system that can generate games that represent ideas, as well as making one more game that revolves around in-game economies. Heck, with Prom Week finished, I may even have time to finish a side project "diary game" that I've been sitting on for years (it's about being in bathrooms).
Josh: I'm working on my dissertation now and should be on the job market in June. I'm looking forward to prototyping some new game ideas, pushing forward social AI systems, and taking a hard look at AI-based game design (while practicing, of course).
RPS: If you could talk to the monsters in Doom, what would you ask them?
Mike: Oh, I don't talk to strangers.
Josh: Um, hey! Where's the yellow key in The Citadel? k thx
RPS: Thanks for your time
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506481.17/warc/CC-MAIN-20230923130827-20230923160827-00402.warc.gz
|
CC-MAIN-2023-40
| 6,984 | 31 |
https://community.eero.com/t/m2l34z/eero-plus-dns-used
|
code
|
eero plus - DNS Used?
Before signing up for eero plus, I switched my eeros over to using OpenDNS. I've noticed in the app that the ability to change my DNS has been disabled, but what is odd is that my devices are still using OpenDNS. I thought everything ran through Zscaler with eero plus; am I mistaken? My content filtering, etc. works as expected, I'm just confused why everything is set to OpenDNS, unless this is by eero's design. I've verified my devices obtain DNS automatically, so they're getting the OpenDNS address from the eero.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00764.warc.gz
|
CC-MAIN-2023-14
| 552 | 2 |
https://www.visualsvn.com/company/news/visualsvn-server-3.4-released
|
code
|
We are happy to announce VisualSVN Server 3.4 release that brings the following main new features and enhancements:
- Update to the latest Apache Subversion 1.9 release.
- New VisualSVN Server PowerShell cmdlets.
- Other improvements such as disaster recovery of VDFS repositories and improved Markdown support in the web interface.
For further details please consider the complete VisualSVN Server 3.4 Release Notes.
Upgrade to the VisualSVN Server 3.4 is recommended for all existing VisualSVN Server users. Download VisualSVN Server 3.4 at the main download page.
Upgrade to Apache Subversion 1.9
Apache Subversion 1.9 is a major Apache Software Foundation release that brings a lot of user-visible changes both to the client and server side. For the complete list of notable improvements please consider Apache Subversion 1.9 Release Notes.
VisualSVN – a professional grade Subversion integration plug-in for Microsoft Visual Studio – has been upgraded to Apache Subversion 1.9 as well. You can download the latest VisualSVN 5.1 at the corresponding download page.
VisualSVN Server PowerShell Cmdlets
VisualSVN Server 3.4 introduces more than twenty PowerShell cmdlets that can be used to perform various management tasks related to Subversion repositories, access rules and VDFS replication. For the brief description and usage examples of all available cmdlets please consider the KB88: VisualSVN Server PowerShell Cmdlet Reference article.
PowerShell cmdlets can be used to manage remote VisualSVN Server instances and are available in all editions of VisualSVN Server, including the free-of-charge Standard Edition.
Other VisualSVN Server 3.4 changes
VisualSVN Server 3.4 introduces a number of other significant improvements, such as the following:
- Improve disaster recovery capabilities for distributed VDFS repositories. New PowerShell cmdlets allow changing roles of the distributed repositories — from master to slave and vice versa, as well as performing disaster recovery with minimal service interruption. For further details please consider KB93: Performing disaster recovery for distributed VDFS repositories article.
- Display repository size and other technical details in VisualSVN Server Manager. This information is available on the new Details tab in the repository’s Properties dialog.
- Improve Markdown support in the Repository Web Interface. Readme files for the current directory are now displayed automatically and relative links to other Markdown files or images are supported. To get a detailed impression please check the sample Markdown documentation project hosted on our online demo server.
- Preview images in Repository Web Interface. All common image file formats are supported.
For the complete list of changes, see the VisualSVN Server 3.4.0 changelog.
End of Support for VisualSVN Server 2.5.x and 3.2.x version families
Since the Subversion 1.7.x version family is no longer supported by the Apache Software Foundation, we are announcing End of Support for the VisualSVN Server 2.5.x versions. In order to reduce the list of supported version families, we are also announcing End of Support for the VisualSVN Server 3.2.x version family. Users of VisualSVN Server 2.5.x and 3.2.x should upgrade to one of the supported versions listed below.
Upgrading from VisualSVN Server 2.5.x to newer versions may require additional administrative actions if you have Subversion authentication enabled on your server. For further details please consider the KB63 article.
We are going to continue providing maintenance updates for the following version families:
- VisualSVN Server 2.7.x (based on Subversion 1.8.x),
- VisualSVN Server 3.3.x (based on Subversion 1.8.x),
- VisualSVN Server 3.4.x (the most recent release based on Subversion 1.9.x).
Upgrade and compatibility concerns
VisualSVN Server 3.4 is backward compatible with older Subversion clients. It can read and write to repositories created by earlier versions as well, so there is no need to dump/load or upgrade your repositories.
Upgrade to VisualSVN Server 3.4 is recommended for all users. Read the KB89: Upgrading to VisualSVN Server 3.4 article before upgrading. Upgrade is free for Standard Edition users and all customers who have an active maintenance subscription for VisualSVN Server Enterprise Edition licenses.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711218.21/warc/CC-MAIN-20221207185519-20221207215519-00711.warc.gz
|
CC-MAIN-2022-49
| 4,312 | 29 |
https://tsrsmegabyte.com/accomplishments/ai-day-2022/
|
code
|
Members of the computer society and the ICT department co-organised the school’s first-ever AI Day on the 13th of May.
During junior and senior long lunch, multiple installations were set up in cluster A to demonstrate the capabilities of AI applications. These included:
- An interactive display that prompted students to identify real human faces from AI-generated ones (developed by Rohan Kapur)
- An AI-powered story generator, which generates short stories from a given set of 4 words (developed by Arjun Sharma)
- A piano built from foil and circuits that plays musical notes when the foil is stepped on (developed by Dhruv Kapur)
- An AI-powered orchestra, in which a laptop webcam would track a user’s body movement and automatically change the behaviour of a virtual orchestra connected to a speaker system. (sourced from experimentswithgoogle)
The organising team consisted of Rohan Kapur (Head of ICT), Advay Gupta (grade 12), Arjun Sharma (grade 11), and Dhruv Kapur (grade 9).
Email [email protected] in case you have any queries about the society or the website.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475833.51/warc/CC-MAIN-20240302152131-20240302182131-00374.warc.gz
|
CC-MAIN-2024-10
| 1,109 | 8 |
http://db.naturalphilosophy.org/book/?bookid=1253
|
code
|
Relativity: Einstein's Lost Frame
KeyWords: relativity, einstein
Rodrigo de Abreu
This work intends to revisit and look in depth at the questions of Absolute Space and Relativity. In particular, its purpose is to provide an alternative derivation of the effects described by Special Relativity, based on a description that assumes a privileged reference frame.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388758.12/warc/CC-MAIN-20200525130036-20200525160036-00145.warc.gz
|
CC-MAIN-2020-24
| 369 | 4 |
https://forums.gentoo.org/viewtopic-t-980364-start-0-postdays-0-postorder-asc-highlight-.html
|
code
|
Tux's lil' helper
Joined: 21 Mar 2004
|Posted: Fri Jan 03, 2014 4:07 am Post subject: start-stop-daemon: /usr/sbin/gdm died
|It's been a while since I last rebooted my system (at least 6 months, maybe more). Unfortunately I had a hardware failure which caused it to stop. I got it back up and running w/ replacement hardware, but now when booting I get the following message during the bootup (ie. before the login manager starts):
|start-stop-daemon: caught an interrupt
start-stop-daemon: /usr/sbin/gdm died
ERROR: could not start the Display Manager
That's highlighted in red, otherwise I likely wouldn't have noticed it. Now here's the weird part: very shortly after that comes up, the gnome login screen starts and seems to work just like it always has. I can log in and everything seems fine.
I've updated system and world (I've even done an emerge -e world), but note that I've masked Gnome 3 entirely (using it at work and not ready for that change yet).
One thing that seems different to me is that the login screen comes up much earlier in the boot process. I'm guessing there's some sort of "quick boot" going on, but if I switch back to the console (Ctrl+Alt+F1) I can see that services and maybe some other stuff are still starting. I wonder if that's related.
Thoughts or suggestions?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538226.66/warc/CC-MAIN-20210123160717-20210123190717-00514.warc.gz
|
CC-MAIN-2021-04
| 1,302 | 11 |
https://sites.google.com/edtechteam.com/kimsuttonssummitsessions/
|
code
|
Kim Sutton's Summit Sessions
Welcome to my Summit resource page.
Click around to find the resource you are after.
If you can't find the resource you are after, you are probably in the wrong session.
If you can't find the resource you are looking for and you are in the correct session, holla at me.
Like, now, do it right now. Don't feel bad for interrupting. We can't move on with the session if you can't find the slides. It's late and I tend to forget things... It's not you, it's me.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667262.54/warc/CC-MAIN-20191113140725-20191113164725-00361.warc.gz
|
CC-MAIN-2019-47
| 487 | 6 |
https://www.b4x.com/android/forum/tags/firebase-crashlytics/
|
code
|
I followed the instructions on this page (https://www.b4x.com/android/forum/threads/crashlytics-crash-reports.87510/) to implement Crashlytics in my application, but I am not receiving any error reports on the Google console.
I've upgraded B4A and Firebase to the newer versions to continue receiving Crashlytics reports from my application even after 15 November 2020.
I continue to receive Crashlytics reports fine, but without custom keys. (Custom keys were set by a Crashlytics module that used Fabric.)
This is a wrapper of the Firebase Crashlytics library for B4i. I made this for @Jack Cole and he gave me permission to post it in the forum to help other users.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510462.75/warc/CC-MAIN-20230928230810-20230929020810-00661.warc.gz
|
CC-MAIN-2023-40
| 641 | 4 |
http://z80homebrew.blogspot.com/2009/07/memory-problems.html?showComment=1247415120353
|
code
|
There are still a few problems with memory. Running a memory test routine at 4MHz throws up a few spurious errors every now and again, fewer at 2MHz and none so far at 1MHz. More capacitor swapping might be in order. I'll get the soldering iron warmed up!
I've got a simple PS/2 PIC microcontroller program working - well it will display the scan code of the currently pressed key on a row of LEDs and send / receive keyboard commands. I'll hook this up to the PIO when I get it going.
First though I think I'll have a crack at getting the SIO going. I've got an SIO/2 chip here and a MAX232, should be enough to get a terminal interface going!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823303.28/warc/CC-MAIN-20181210034333-20181210055833-00289.warc.gz
|
CC-MAIN-2018-51
| 645 | 3 |
https://lawprofessors.typepad.com/healthlawprof_blog/2022/12/an-urgent-call-to-integrate-the-health-sector-into-the-post-2020-global-biodiversity-framework.html
|
code
|
Friday, December 9, 2022
Simon King (Independent), Chris Lemieux (Wilfrid Laurier University), Melissa Lem (Independent), An Urgent Call to Integrate the Health Sector into the Post-2020 Global Biodiversity Framework (2022):
There is a rapidly closing window of opportunity to stop biodiversity loss and secure the resilience of all life on Earth. In December 2022, Parties to the United Nations (UN) Convention on Biological Diversity (CBD) will meet in Montreal, Canada, to finalize the language and terms of the Post-2020 Global Biodiversity Framework (Post-2020 GBF). The Post-2020 GBF aims to address the shortcomings of the previous Strategic Plan on Biodiversity 2011-2020 by introducing a Theory of Change, which states that biodiversity protection will only be successful if unprecedented, transformative changes are implemented effectively by Parties to the CBD. In this policy perspective we explore the implications of the Theory of Change chosen to underpin the Post-2020 GBF, specifically that broad social transformation is an outcome that requires actors to be specified. We detail how the health sector is uniquely positioned to be an effective actor and ally in support of the implementation of the Post-2020 GBF. Specifically, we highlight how the core competencies and financial and human resources available in the health sector (including unique knowledge, skill sets, experiences, and established trust) provide a compelling, yet mostly untapped opportunity to help create and sustain the enabling conditions necessary to achieve the goals and targets of the framework. While by no means a panacea for the world's biodiversity problems, we posit that explicitly omitting the health sector from the Post-2020 GBF substantially weakens the global, collective effort to catalyze the transformative changes required to safeguard biodiversity.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100534.18/warc/CC-MAIN-20231204182901-20231204212901-00386.warc.gz
|
CC-MAIN-2023-50
| 1,864 | 3 |
http://www.successconsciousness.com/blog/category/happiness-fun/page/3/
|
code
|
Yesterday, I went to the cinema with my wife and kids to watch the movie The Social Network. It is a story about the creation of the social networking website Facebook, about the founders of Facebook, and about the lawsuits against its founder Mark Zuckerberg.
One night in 2003, Harvard student and computer programmer Mark Zuckerberg, played by Jesse Eisenberg, begins working on a new idea. Between blogging and programming, he starts something soon to become known as Facebook. Starting with no money, except for some small amounts invested by his friend Eduardo Saverin, played by Andrew Garfield, he created a website that at first had members only at Harvard University, but soon included other universities, and then the whole world.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661778.22/warc/CC-MAIN-20160924173741-00036-ip-10-143-35-109.ec2.internal.warc.gz
|
CC-MAIN-2016-40
| 729 | 2 |
https://blog.stevedoria.net/20100331/problem-reports
|
code
|
The responsibility of a project’s outcome is shared between developers and managers. Developers depend on managers to effectively manage projects, and managers depend on developers to provide reports that serve as the basis of project management decisions.
Possibly due to insufficient transparency, a problem may be detected only after attempting to use a fully implemented and unit-tested software component. The integration phase of a software life cycle is a common, but undesirable, phase in which to detect interface issues. A problem found during the integration phase potentially reopens tasks that were considered complete. Design documentation might need to be updated, software might need to be re-implemented, and software might need to be retested. This increases the difficulty of meeting a project schedule, which may already be strained during integration.
The person who detects the problem has the responsibility to make it known. Resolving the issue among developers of the affected modules may be possible, but in general, resolution will involve too many people and too many tasks. Utilizing the managerial structure is pragmatic and effective in coordinating a resolution. A line manager or immediate supervisor is an entry point into the managerial structure, so reporting the problem to that manager is reasonable.
Without a problem report, management is unaware of the problem and unable to manage the project with respect to the problem. Reporting the problem to management is a practice that increases project visibility. Management may decide that the problem shall not be resolved, because of schedule pressure or the severity of the problem. Management may decide that it shall be resolved within the constraints of the project schedule. Management may also decide to resolve the issue and revise the project schedule. Management must make the decision for the project, but the need for a decision must be made visible by developers through problem reports.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510983.45/warc/CC-MAIN-20231002064957-20231002094957-00172.warc.gz
|
CC-MAIN-2023-40
| 1,991 | 4 |
https://smartboxgames.com/category/ipad/page/2/
|
code
|
The physical book is a fun read but the iPad versions brings the characters to life. I was the Project Manager on this book and I had a great time working with the team. The brainstorming sessions were great; the list of fun ideas was huge. Below is a just released video of the book. Outback Odyssey was developed by iStoryApps.com.
For the last several months I have had great fun working with www.istoryapps.com. They are a publisher and developer of high quality books for the iPad. Most of you know me as a game developer, but my background is in education, specifically teaching children how to read using technology. I leveraged that experience to create games for adults such as WordPop! When I got the chance to work on some children's books I jumped at it. My role was to work with authors and illustrators and brainstorm with them how the team could transform their book into an interactive iPad iBook. There are several books in the queue at Apple but below is one of the first books I worked on. Working with Woody was fun and inspiring. The more he understood the platform the more ideas we had. Well, enough writing. Please watch the video below and let me know what you think.
Last week, on May 13, we submitted an update to WordPop! iPad to Apple. We were excited about this new release because it supported landscape. Players would be able to enjoy WordPop! Volt on the iPad in all orientations. I personally really enjoy playing in landscape mode a bit more than portrait as I find holding the iPad horizontally easier.
After a week, we heard back from Apple: the update was REJECTED. To quote Apple, "The iPad Human Interface Guidelines state that only one popover element should be visible onscreen at a time. On launch, and when the user taps the "Add Player" button, an additional popover is displayed for the user to enter a player name. Screen shots are attached for your reference."
My team and I are aware of the one popover limit, but the second popover is a dialog, which I did not consider in the same class as a popover. Additionally, Apple had approved the two previous submissions of WordPop! Volt, which had "Add Player" working exactly the same way.
Apple has been very good to us in that they usually provide a screen shot and description of what is wrong.
The fix is straightforward: we need to remove the first popover "Change Player" when "Add Player" is selected. This should be done any second now and we will resubmit.
Lesson learned: no matter what Apple calls the widget in the iPad User Interface Guidelines, only one "popover" may be visible at any given time.
Update: May 24, 2010 – We resubmitted WordPop! Volt. Below is a screen shot of the change. This new look conforms to the iPad UI guidelines.
My thoughts about how I think I will be using my iPad. I plan to review this post in a month to see how reality compares to the dream.
What I am really looking forward to is email, which is silly for such an expensive device.
I typically wake up pretty early in the morning and check my email on my iPod Touch. If there are emergencies, I get up, if not I try to sneak in another 30 minutes.
In some cases I bring my laptop to bed and do email on it. But then in the morning I need to set it back up in my home office, not a big deal but it gets tiring. The irony of all of this is the recognition that although a laptop is portable the iPad is even more so because of the weight and form factor.
I often tweak my web pages late at night in bed too, fixing a page here and there (nothing serious, as I need my large screen for real work), but if I could make simple changes to text or styling with the iPad that would be great. I will have to wait and see if there are any FTP apps for the iPad.
Watching Netflix will be big for me. My wife and I watch a lot of movies together but we have very different tastes in TV shows. I am more apt to watch Dr. Who or Lost. So I watch a lot of those shows on my laptop. Having the iPad for that will be great.
I often go to Seattle for the day and I just need something light. Email, light word processing, maybe look up a store or get directions in Google Maps. The iPad will be great for that. There are tons of free Wi-Fi spots in Seattle or I can use my friend’s Wi-Fi at their house.
Lastly, playing games and reading books will be great.
I can see using my iPad as my third screen and have HTML5 or CSS reference material up when I am working on my web site.
In addition to creating my own iPad games I contract as a Project Manager / Designer and I am currently working on a few iPad apps for a client. Having a real device to test on will be a pleasure.
OK, let’s see how I wind up using the iPad over the next month.
The iPad is being marketed as a very casual device as demonstrated by Steve Jobs on stage while sitting on a couch. The only way he could have looked more relaxed would be if he was in a t-shirt and boxers drinking a beer. His point was well taken by many including my team, the iPad will be used in the living room, den or some other communal space. This makes the iPad a shared device. Let me say that again, unlike the iPhone, which you might loan to someone for a brief moment, such as a friend at a coffee shop, the iPad is meant to be a shared device.
What does this mean for WordPop!? We’ve concluded that WordPop! will be shared among family members or friends, thus we will need a sign-in. This will allow several family members to start and play their own games and it will allow individual players to save multiple games. This is fantastic feature. One game could be played with the goal of getting the highest score ever on Medium Level while another game could be dedicated to making high scoring words for the Global All Time Best Words List. Even better, another game could be saved for a child who wants to practice making words (we’ve heard from several parents they use WordPop! in this way). Another advantage of having a sign-in is we can get a name up front for the High Score and Best Words pages.
I for one can’t wait until Wyatt finished with sign-in as I too want to play several games at once each with a different goal.
If you are a developer and thinking of having sign-in make sure to plan this up front as it is a complicated feature if not thought out early. You will want to list out which items are saved per player and which items should be global, such as posting scores to our server. If you would like further information about our sign-in flow, please feel free to email me.
Look for more peeks into our development of WordPop! for iPad in coming blogs. Please share this blog and follow Smart Box Design on Twitter.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00796.warc.gz
|
CC-MAIN-2022-49
| 6,706 | 25 |
https://docs.jina.ai/index.html
|
code
|
Welcome to Jina!#
Jina is an MLOps framework that empowers anyone to build cross-modal and multi-modal applications on the cloud. It uplifts a PoC into a production-ready service. Jina handles the infrastructure complexity, making advanced solution engineering and cloud-native technologies accessible to every developer.
Applications built with Jina enjoy the following features out of the box:
Build applications that deliver fresh insights from multiple data types such as text, image, audio, video, 3D mesh, PDF with Jina AI’s DocArray.
Support all mainstream deep learning frameworks.
Polyglot gateway that supports gRPC, Websockets, HTTP, GraphQL protocols with TLS.
Intuitive design pattern for high-performance microservices.
Scaling at ease: set replicas, sharding in one line.
Duplex streaming between client and server.
Async and non-blocking data processing over dynamic flows.
Seamless Docker container integration: sharing, exploring, sandboxing, versioning and dependency control via Executor Hub.
Full observability via Prometheus and Grafana.
Fast deployment to Kubernetes, Docker Compose.
Improved engineering efficiency thanks to the Jina AI ecosystem, so you can focus on innovating with the data applications you build.
Free CPU/GPU hosting via Jina Cloud.
Make sure that you have Python 3.7+ installed on Linux/MacOS/Windows.
pip install -U jina
conda install jina -c conda-forge
docker pull jinaai/jina:latest
Now that you’re set up, let’s create a project:
jina new hello-jina
cd hello-jina
jina flow --uses flow.yml
docker run -p 54321:54321 -it --entrypoint=/bin/bash jinaai/jina:latest
jina new hello-jina
cd hello-jina
jina flow --uses flow.yml
Run the client on your machine and observe the results from your terminal.
python client.py
['hello, world!', 'goodbye, world!']
Executor is a self-contained logic unit that performs a group of tasks on a DocumentArray.
Flow orchestrates Executors into a processing pipeline to build a multi-modal/cross-modal application.
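To make these two concepts concrete, here is a minimal sketch assuming the Jina 3.x Python API; the Greeter Executor and its greeting logic are invented for this example:

from jina import Document, DocumentArray, Executor, Flow, requests

class Greeter(Executor):
    @requests  # bind this method to every endpoint
    def greet(self, docs: DocumentArray, **kwargs):
        for doc in docs:
            doc.text = f'hello, {doc.text}!'

f = Flow(port=54321).add(uses=Greeter)

with f:
    # send two Documents through the pipeline and print the returned texts
    result = f.post('/', DocumentArray([Document(text='world'), Document(text='jina')]))
    print(result.texts)  # ['hello, world!', 'hello, jina!']

While the Flow is running, the same requests can also be sent from a separate process with jina's Client connected to the same port.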
Join our Slack community and chat with other community members about ideas.
Join our Engineering All Hands meet-up to discuss your use case and learn Jina’s new features.
Subscribe to the latest video tutorials on our YouTube channel
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710808.72/warc/CC-MAIN-20221201085558-20221201115558-00453.warc.gz
|
CC-MAIN-2022-49
| 2,220 | 29 |
https://lists.debian.org/debian-arm/2009/12/msg00067.html
|
code
|
Bug#562867: RM: openoffice.org [armel] -- ROM; ANAIS; toolchain breakage; package unusable
please remove openoffice.org for armel from experimental and unstable.
There is a long-standing toolchain breakage which prevents it from starting
http://lists.debian.org/debian-openoffice/2009/12/msg00172.html + thread
(the "workaround" there - taking the old built .so from a old binary
package - is not applicable here and would be against policy anyway)
I disabled armel in the 1:3.1.1-12 upload. It will be re-enabled when a fixed toolchain is available.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814105.6/warc/CC-MAIN-20180222120939-20180222140939-00051.warc.gz
|
CC-MAIN-2018-09
| 516 | 7 |
http://inform.ikd.kiev.ua/integration/
|
code
|
- Space Research Institute of National Academy of Sciences of Ukraine, National Space Agency of Ukraine, Kyiv, Ukraine
- V.M. Glushkov’s Institute of Cybernetics of National Academy of Sciences of Ukraine, Kyiv, Ukraine
- Karen Moe, National Aeronautic and Space Development Administration, U. S. A.
- Jean-Pierre Antikidis, Centre National D'Etudes Spatialas, France
- Dana Petcu, Western University of Timisoara, Romania
- Ivan Petiteville, European Space Agency, Italy
The main goal of the project is the development of new methods for heterogeneous data fusion and of Grid technologies for their implementation and for visualization of results. The developed methods will be implemented as operational Grid services for agricultural and natural disaster monitoring in the interests of sustainable development and security. The developed infrastructure should be considered a Ukrainian contribution to the development of the Global Earth Observation System of Systems (GEOSS, http://www.earthobservations.org/). The proposed infrastructure could become a segment of the European Space Agency (ESA) Grid Processing on Demand (G-POD) for Earth Observation Applications (http://gpod.eo.esa.int). The developed methods and Grid infrastructure will also contribute to the International Charter "Space and Major Disasters" (http://www.disasterscharter.org), helping to mitigate the effects of disasters on human life and property.
Application Areas of the Project
As a result of the completion of the project, new methods for integration of data of different nature will be developed. Within this project the following scientific results of practical interest will be obtained:
- new methods for integration of data of different nature, namely from different satellite instruments (optical and microwave) and modeling data, to vegetation state monitoring and soil moisture estimation
- intelligent techniques for Earth observation data processing
- Grid implementation of developed methods and algorithms
- Grid service template solution for geospatial data archive and data access components
- Grid service framework for solving applied problems of vegetation state estimation, soil moisture assessment, and drought and flood monitoring, with implementation
- Reusable template for visualization system of geospatial data in Grid environment
- Modular data assimilation system for unified acquisition of geospatial data
- Grid implementation of data publishing system with OGC WMS and OGC WCS interfaces for external applications and decision support systems
- Service for floods prediction and monitoring
- Service for droughts prediction and monitoring
- Service for plants state monitoring
The developed methods could be applied in different domains. Implementation of the proposed methods on the basis of Grid technologies will enable their broad application in distributed information systems, in particular GEOSS, ESA Grid Processing on Demand (G-POD), the Ukrainian Academician Grid segment, the EGEE project and the international Grid system of space agencies, Wide Area Grid (WAG). Implementation of applied Grid services in operational mode will enable a new level of decision support and will serve as added value to GEOSS system development.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496694.82/warc/CC-MAIN-20190220210649-20190220232649-00223.warc.gz
|
CC-MAIN-2019-09
| 3,223 | 20 |
https://nulll.net/music/about.php
|
code
|
I started playing guitar mid 1997. Almost right away, I found in things I created a medium for texts I wrote in those days.
My earliest music was influenced by the genres I listened to at the time, mainly black metal, darkwave and gothic. Gradually I began creating a sound of my own.
By the time I had the first few things that could be called actual songs, proper music, I had gone through a number of potential band names, and settled on Until Death Overtakes Me somewhere in 1999. My music at that point still had ties with metal and darkwave, but also gained ambient and experimental elements.
After a couple of tests, a first, and limited, release happened in 2001. Another album followed shortly after, then the first release was rewritten and made available again. By then, UDOM's sound had evolved towards a combination of slow doom metal and ambient/wave.
I continued experimenting with sounds, resulting in a number of side-projects, most with their own releases, while UDOM put out a few more albums.
For a number of reasons I had to take a break from music, starting mid 2011 and lasting until late 2015. During this time I focussed on programming computer games, but still managed to create some music.
From early 2016, I began releasing UDOM material again. Still experimenting with sound, I ended up with a number of side-projects as well, and some of the old projects became active, too.
From here on, I aim to make new material available on my website each month, be it new UDOM tracks or albums, or stuff by my side-projects. As has ever been the case, the majority of this music will be available as free downloads.
email : [email protected]
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506420.84/warc/CC-MAIN-20230922134342-20230922164342-00498.warc.gz
|
CC-MAIN-2023-40
| 1,672 | 14 |
https://www.bridgeall.com/2022/07/28/an-overview-of-azure-migrate/
|
code
|
Most organisations have gone through or have discussed migrating to the cloud. However, without the right approach and required solutions it can be a challenge. Microsoft have recognised this as an area that many organisations need help with and that’s where Azure Migrate comes in.
What is Azure Migrate?
Azure Migrate provides a central hub of tools to start, run, track, and analyse your migration journey to Azure.
Azure Migrate is a free Microsoft service, with tools for each scenario available without any additional licensing costs.
The Azure Migrate hub integrates Azure services and partner solutions to maximize your options and inform your decisions. Using the hub, you can keep all your migration data in one place for a comprehensive view across workloads and tools.
Azure Migrate assessment tool
Azure Migrate also helps you understand your existing workloads and create a migration plan.
Choose agent-based or agentless methods to discover machines and applications in your environment and understand how they interact with each other. Choose from Microsoft or partner solutions based on your needs. They do this via their two assessment tools:
- Server assessment tool – To optimise the assessment process and discover servers in your compute estate, choose to deploy the Azure Migrate Appliance to the on-premises environment or use CMDB information to import into Azure Migrate via CSV (a sketch of generating such a CSV follows this list). This enables you to identify the servers and workloads you need to assess.
- Database assessment tool – Similar to the server assessment tool, the database assessment tool assesses your databases and identifies any issues or blockers for migration.
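As a sketch of the CSV import path mentioned in the first item above: the column headers below are placeholders for illustration only (the authoritative headers come from the CSV template Azure Migrate provides); only Python's standard csv module is assumed:

import csv

# Hypothetical column names -- replace with the headers from the
# official Azure Migrate CSV import template.
FIELDS = ['Server name', 'IP address', 'Cores', 'Memory (MB)', 'OS name']

# example CMDB extract to be written in an upload-ready shape
inventory = [
    {'Server name': 'app-01', 'IP address': '10.0.0.4', 'Cores': 4,
     'Memory (MB)': 16384, 'OS name': 'Windows Server 2019'},
    {'Server name': 'db-01', 'IP address': '10.0.0.5', 'Cores': 8,
     'Memory (MB)': 32768, 'OS name': 'Ubuntu 20.04'},
]

with open('azure_migrate_inventory.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(inventory)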
What can you migrate?
Azure Migrate helps you migrate a wide range of different workloads, server types and databases to Azure. These include:
- Windows and Linux servers
- SQL and non-SQL databases
- Web apps
- Virtual desktop infrastructure
Review migration activity in one place
Ensure more successful migrations by using Azure Migrate to track your efforts from start to finish. Starting with assessment details, the service delivers insights into your environment and dependencies. It continues tracking throughout the migration process—adding information from all tools in use. Simply access any tool from within Azure Migrate to keep all migration project details in one place.
- Visualise progress with a dashboard across discovery, assessment, and migration phases.
- Centralise data and insights for specific migration projects.
- Create migration projects for different business areas.
- Create and store multiple assessments for server groups.
Azure Migrate is a great solution for organisations looking at cloud migration. Not only is it a free tool to help with the migration, it can also be a useful solution during the planning and assessment phase. Discover our Azure migration services here or contact us for more information.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817790.98/warc/CC-MAIN-20240421163736-20240421193736-00617.warc.gz
|
CC-MAIN-2024-18
| 3,010 | 23 |
https://www.jobswipe.net/exit/?jobHash=B1B58EAE4A332A77F5C607024EF19C8A
|
code
|
ADG Tech is seeking Senior Solution Architect/DevSecOps Architects for Federal projects.
o Shall have a minimum of ten (10) years of experience in the Information Technology field focusing on development projects, DevSecOps and technical
o Shall possess strong architecture & design experience, including at least three (3) years of experience deploying enterprise applications in AWS.
o Shall possess expertise in large scale, high performance enterprise big data application deployment and solution architecture on complex heterogeneous
environments in AWS.
o Shall have, at a minimum, a Bachelor's degree in Computer Science, Information Technology Management or Engineering, or other comparable field.
Experience with Microsoft/container technology will be highly preferred.
Candidates with DHS/USCIS clearance highly preferred
Technology Stack: AWS Cloud, Akamai, Amazon Linux, Apache ActiveMQ, Apache Commons libs, Apache JMeter, Chef, Apache Tomcat, CentOS, Chaos Monkey, Cucumber/Jasmine/Selenium, Deque FireEyes, Jenkins, Docker, Fortify, Git/Enterprise GitHub, Hibernate 4, iText, Liquibase, Jackson, Maven, Java, JavaMail, JAXB, Jira, JUnit, OpenShift, snap, SAS, Ruby, Rails, Oracle & PL/SQL, PostgreSQL, SiteMesh, Python/Anaconda, Nexus, Spring Framework, SOAP UI, Ubuntu, Windows Server, Bouncy Castle (FIPS), Kafka, Spark/Scala, Artifactory, HashiCorp Terraform
Interested candidates may please respond with resume and contact details.
Role: DevOps Architect
Apply for this job now.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479159.2/warc/CC-MAIN-20190215204316-20190215230316-00293.warc.gz
|
CC-MAIN-2019-09
| 1,534 | 13 |
https://medadvisor.com.au/HowItWorks/ScriptHistory
|
code
|
Browse details of all your prescriptions and past repeat dispenses.
MedAdvisor provides you with a complete view of all your current and historic prescriptions for each prescription medication. The Scripts & Repeats area of Medicine Details provides a complete picture of your current scripts (i.e. those that can still be dispensed from).
For each script you'll see:
- the medication name, type and strength
- the number of repeats left
- the script expiry date
- the script status: New (unused), Active (has been used), Finished (no repeats left), Expired, etc.
The same information can also be seen for your historic scripts, i.e. old scripts that have been completely consumed or have expired. Just tap Load Prescription History.
Each script entry can be expanded to show the individual dispense instances, including the location, date and quantity dispensed by your pharmacy every time you’ve had that script filled.
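As a rough illustration of the information each script entry carries, here is a small Python model; the class and field names are hypothetical, chosen to mirror the fields listed above, and are not MedAdvisor's actual data model:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Dispense:
    pharmacy: str          # location where the script was filled
    dispensed_on: date     # date of this dispense
    quantity: int          # quantity dispensed

@dataclass
class Script:
    medication: str        # name, type and strength
    repeats_left: int      # number of repeats remaining
    expires_on: date       # script expiry date
    status: str            # "New", "Active", "Finished", "Expired", ...
    dispenses: list = field(default_factory=list)  # individual dispense instances

    def is_current(self) -> bool:
        # a current script can still be dispensed from:
        # it has repeats left and has not yet expired
        return self.repeats_left > 0 and self.expires_on >= date.today()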
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104204514.62/warc/CC-MAIN-20220702192528-20220702222528-00517.warc.gz
|
CC-MAIN-2022-27
| 923 | 9 |
https://security.stackexchange.com/questions/223934/is-someone-trying-to-hack-into-my-server
|
code
|
I'm a cyber security student and don't do server stuff on a regular basis. I was just wondering how to check SSH login logs and found that they can be checked using
sudo cat /var/log/auth.log
I checked on my server and there were lots of entries like
Failed password for root from [IP]
This is a newly installed remote server; there's no way I could have logged in (or failed to) that many times.
Then I read it carefully. It says
Failed password for root from [IP]
I was like, what? It's for root? I have created my own separate user account, and except for the first time, when I had to create that account, I have never touched the root user. It seems to me someone is trying his luck by brute-forcing credentials. Still, I wanted to ask my seniors here what they think.
I've nothing running on this server, not even Apache, nginx, etc. Only the SSH port is open, and AFAIK there's no recent SSH vulnerability in public knowledge.
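Since the question started with how to check the logs, here is a minimal sketch that tallies failed password attempts per user/IP pair; it assumes the Debian/Ubuntu log location and the standard sshd message format, so adjust both for your own system:

import re
from collections import Counter

LOG = '/var/log/auth.log'  # typical location on Debian/Ubuntu
# matches lines such as:
#   Failed password for root from 203.0.113.7 port 22 ssh2
#   Failed password for invalid user admin from 203.0.113.8 port 22 ssh2
PATTERN = re.compile(r'Failed password for (?:invalid user )?(\S+) from (\S+)')

attempts = Counter()
with open(LOG, errors='replace') as f:
    for line in f:
        match = PATTERN.search(line)
        if match:
            attempts[match.groups()] += 1

# print the ten noisiest (user, IP) pairs
for (user, ip), count in attempts.most_common(10):
    print(f'{count:6d} attempts for user {user!r} from {ip}')

Running something like this on a freshly exposed server typically shows exactly the pattern described above: large numbers of attempts against root and other common usernames.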
And one more important thing I wanted to ask: being a security student, this really grabs my attention and makes me curious to understand it. Why would someone run scripts to brute-force and scan new servers? I mean, what would he get? There's barely anything in my case. Initially, I thought maybe he wants to spread malware using my server, but if someone has the resources to scan the entire internet he surely has the resources to do that himself. Maybe he just wants to add servers to his list of compromised servers and use all of them together as a botnet; so many things going on in my mind. What would he do with a new server?
EDIT: Something I realized today is that, as a security student, I was approaching things from the offensive side. Now that I have set up my own server, I really understand the need to know things from the defensive side as a pentester. If any student is reading this, I would say: learn the defensive side as well. I will, from now on.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506028.36/warc/CC-MAIN-20230921141907-20230921171907-00796.warc.gz
|
CC-MAIN-2023-40
| 1,843 | 10 |
https://create.roblox.com/docs/education/build-it-play-it-story-games/coding-a-question
|
code
|
Remember how you wrote a sentence for your story, then swapped a word out for a placeholder? It's time to give players a chance to add something to your experience.
In the script, the placeholder you made will be a variable. In coding, variables are placeholders for information, in this case a word.
You'll start by asking players a question. Then, they'll type in an answer that gets stored in the variable.
Creating a Variable
Variables have names that tell programmers what they store. In this case, you'll create a variable called name1 for the placeholder.
Click below the dashed lines and type local name1.

-- GLOBAL VARIABLES
local storyMaker = require(script:WaitForChild("StoryMaker"))

-- Code controlling the game
local playing = true

while playing do
    storyMaker:Reset()

    -- Code story between the dashes
    -- =============================================
    local name1

    -- =============================================

    -- Add the story variable between the parenthesis below
    storyMaker:Write()

    -- Play again?
    playing = storyMaker:PlayAgain()
end
Setting a Variable
Now players need to have a chance to put something inside the placeholder. To change a variable, it needs to be set to something using the = symbol.
After name1, make sure to add a space and then type =.

while playing do
    storyMaker:Reset()

    -- Code story between the dashes
    -- =============================================
    local name1 =

    -- =============================================

    -- Add the story variable between the parenthesis below
    storyMaker:Write()
end
After the equal sign, type storyMaker:GetInput(). The code must be typed exactly as is, and capital letters must match.

while playing do
    storyMaker:Reset()

    -- Code story between the dashes
    -- =============================================
    local name1 = storyMaker:GetInput()

    -- =============================================

    -- Add the story variable between the parenthesis below
    storyMaker:Write()
end
Typing a Question
Variables can store different types of data including small numbers, true or false values, and strings. String type variables are special because they can store whole sentences. It's easy to spot string type variables because they're always in quotation marks "like this".
The question to ask players will be a string variable.
In GetInput(), click between the parentheses. Inside, type a question enclosed by quotation marks.

-- Code story between the dashes
-- =============================================
local name1 = storyMaker:GetInput("What is your favorite name?")

-- =============================================
end
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710890.97/warc/CC-MAIN-20221202014312-20221202044312-00660.warc.gz
|
CC-MAIN-2022-49
| 2,645 | 14 |
http://packages.ubuntu.com/source/lucid/elilo
|
code
|
Source Package: elilo (3.10-1ubuntu1)
Links for elilo
Please consider filing a bug or asking a question via Launchpad before contacting the maintainer directly.
Original Maintainer (usually from Debian):
It should generally not be necessary for users to contact the original maintainer.
The following binary packages are built from this source package:
- Bootloader for systems using EFI-based firmware
Other Packages Related to elilo
- helper programs for debian/rules
- The GNU assembler, linker and binary utilities
- Library for developing EFI applications
- tool for managing templates file translations with gettext
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930109.71/warc/CC-MAIN-20150521113210-00157-ip-10-180-206-219.ec2.internal.warc.gz
|
CC-MAIN-2015-22
| 621 | 12 |
http://www.linux.sgi.com/archives/xfs/2001-05/msg03446.html
|
code
|
At 5/15/01 04:07 AM, you wrote:
>I read the FAQ about dump, etc., but how about tar (GNU)?
>Can I count on
>a successful transfer of a tarball (*.tgz, with subdirs) on
>an xfs partition to a non-xfs
>partition on another architecture (or the same
Tar works between almost everything. You can create *.tar on SCO UNIX,
and then untar this file on HP-UX, AIX, Digital UNIX or whatever you want
(as long as it has at least a POSIX-compliant implementation of tar), regardless of
the processor type used (ia32, RISC, PowerPC) and regardless of filesystem type.
Years ago I was working in a heavily mixed *UX environment. No problems at all
(with tar of course :)) ). There are implementations of tar for MS Windows
and MS-DOS.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423927.54/warc/CC-MAIN-20170722082709-20170722102709-00316.warc.gz
|
CC-MAIN-2017-30
| 704 | 13 |
https://bytepointer.com/resources/old_new_thing/20050726_202_what_is_the_difference_between_wm_destroy_and_wm_ncdestroy.htm
|
code
|
|Date:||July 26, 2005 / year-entry #203|
|Summary:||There are two window messages closely-associated with window destruction, the WM_DESTROY message and the WM_NCDESTROY message. What's the difference? The difference is that the WM_DESTROY message is sent at the start of the window destruction sequence, whereas the WM_NCDESTROY message is sent at the end. This is an important distinction when you have child windows....|
The difference is that the WM_DESTROY message is sent at the start of the window destruction sequence, whereas the WM_NCDESTROY message is sent at the end. When a parent window with one child window is destroyed, the messages therefore arrive in the following order:
hwnd = parent, uMsg = WM_DESTROY
hwnd = child, uMsg = WM_DESTROY
hwnd = child, uMsg = WM_NCDESTROY
hwnd = parent, uMsg = WM_NCDESTROY
Notice that the parent receives the WM_DESTROY message before its child windows are destroyed, and receives the WM_NCDESTROY message after its child windows have been destroyed.
Having two destruction messages, one sent top-down and the other
bottom-up, means that you can perform clean-up appropriate to
a particular model when handling the corresponding message.
If there is something that must be cleaned up top-down, then you can do it in your WM_DESTROY handler; clean-up that must happen bottom-up belongs in the WM_NCDESTROY handler.
These two destruction messages are paired with the analogous creation messages, WM_CREATE and WM_NCCREATE.
What's this "absence of weirdness" I keep alluding to? We'll look at that next time.
[Typos corrected, 9:30am]
<-- Back to Old New Thing Archive Index
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.13/warc/CC-MAIN-20231206031946-20231206061946-00759.warc.gz
|
CC-MAIN-2023-50
| 1,084 | 13 |
https://frontiersi.com.au/news-events/?category=events&event=frontiersi-conference
|
code
|
Earth Science Week 2019 will be celebrated during the week of October 13-19, 2019 with the theme: Geoscience is for everyone!
Aligned with ongoing efforts to enhance diversity, equity, inclusion, and accessibility in the sciences, this theme acknowledges both the potential and the importance of the geosciences in the lives of all people.
Providing geospatial capacity for the indigenous community
The Indigenous Mapping Workshop (IMW) and its strategic partners are dedicated to the development and advancement of culturally appropriate and inclusive geospatial technologies for Indigenous leadership, agencies, and communities to support Indigenous rights and interests.
Australia’s coastline has been subject to over 30 tsunamis since 1950, most generated by distant earthquakes in the Pacific and Indian Oceans. Some induced hazardous marine currents and inundation. Should we expect larger tsunamis in the future? Where? How big? How often?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00021.warc.gz
|
CC-MAIN-2019-43
| 939 | 5 |
https://www.boa.nl/en/cv
|
code
|
|Name:||Drs. Ing. Geert Boeve|
|Aug. 1969 - Aug. 1975||Havo |
Goois Lyceum, Bussum
|Aug. 1975 - Sept. 1979||HTS navigation (Ships officer merchant navy) |
Hogere Zeevaartschool Amsterdam
|Sept. 1979||Certificate third officer|
|June 1982||Certificate second officer|
|Sept. 1982||General certificate radiotelephony|
|Sept. 1983 - April 1989||Computer Science (Drs.) |
University of Amsterdam
|7 - 24 November 1987||Study journey to Japan. |
Made visits to various universities and companies like the University of Tokyo, FANUC, ETL, AIST, ICOT, MITI, NTT, NEC, NISSAN, Sharp and Kyoto University.
|Recent technologies:|| |
|Oct. 1992 - Up till now||As independent developer working with the name Boeve Automatisering. |
K.v.K. 33241732 in Amsterdam.
|May 2020 – Present||Development of simple games based on JavaFX and GraalVM with the names CoronaKiller, VirusMaze and HappySwiper. Thanks to Java 14, JavaFX and GraalVM, these games are available for iOS (iPhone and iPad), Android, Mac OSX, Windows and Linux. For more information and free demos, see www.boa.nl.|
|November 2015 – Present||Designing, setting up and developing a new robot website called RoboBuddy. Website can be found online as www.robobuddy.com. Website was created with Liferay Portal (Java) software and standard Java JSR 168/286 portlets. The site is located on a cloud server behind an Apache web server with Tomcat 7 and is connected to a MySQL database.|
|July 2014 - October 2015||As Java programmer working at ECI 'The Book Club' in Houten. Development of software for reporting at the Commissariaat van de Media website of club prices of books offered by ECI. Furthermore, software created for the production of letters in PDF using the JasperReports library. Various business reports created with JasperReports and JasperServer. Large number of web crawlers made with the jsoup library for the collection of book reviews on various media and publishing sites (eg. De Volkskrant, The Bezige Bij, etc.). Tools etc. used: Oracle 11, jCatalog, Eclipse, Apache POI (Excel), jsoup, Jasper Server Jasper Studio, pgAdmin and SQL Developer.|
|January 2009 - December 2019||Maintenance and extensions for the AmstelTrade system of Amstel Securities (see below). Added screens and software for the handling of option transactions. Added software to automatically handle the daily MiFID reports to the financial authorities such as AFM and FSA. Ajax technology with the Dojo toolkit was added to a number of webpages for improved user interaction. Pages added on the basis of the GateIn Portal of JBoss. (See also below, March 2004.)|
The Android version is in development.
|January 2009||Emeal/PizzaOnline taken over by thuisbezorgd.nl. This company is taking care of further maintenance and development.|
|May 2008 - January 2009||Maintenance and updates to the Emeal/PizzaOnline website. Maintenance and small updates to the AmstelTrade system of Amstel Securities (see below).|
|January 2008 - April 2008||Made a mobile (WAP 2.0) version of the Emeal/PizzaOnline website with domain name emeal.mobi. |
This sub version of the Emeal website is suitable for smart phones, pda, Apple iPhone etc.
WAP 2.0 uses as markup XHTML-MP.
|May 2007 - December 2007||For Amstel Securities the AmstelTrade system (see below) extended and made suitable to conform to the European MiFID rules. |
Part of this extension are daily MiFID reports send the AFM (Authoriteit Financiële Markten) en FSA (Financial Services Authority).
This reports are xml files daily automatically uploaded by the AmstelTrade system by sftp to this financial authorities.
|December 2006 - April 2007||Via Atos Origin (and InfoMotion) working at the Delta Lloyd in Amsterdam. |
Did some work on a Confluence wiki application used at the department 'Asset Management'. The wiki was used as internal and external information and documentation system.
Created some new plugins in Java en made a new information structure for the internal wiki version.
Next to this work did programming work in Java at the pensions department.
Helped creating a web interface as part of the CUC (Centrale UPO Component) Manager for maintaining and checking of the print commands of UPO's (Uniform Pensioen Overzicht) to the central print department.
This web interface was based on the Spring library. Development at the Delta Lloyd was done with WebSphere tools.
|April 2006 - November 2006||For Entrepreneur Consultancy B.V. in 's-Graveland changes and improvements made to there online entrepreneur test (http://www.ondernemerstest.nl). |
With this test a person can test if he or she is capable for the job as entrepreneur.
The test is written in Java with MySQL as database and Jboss as enterprise server.
|Mar. 2006 - Mar. 2006||Work at Lost Boys B.V. in updating the "Extra Fris" website (http://www.extrafris.nl) for a client of them. |
Programming work in Java with Struts and iBatis as the main libraries. Also using Eclipse and MS SQL Server.
|Mar. 2004 - December 2019||In cooperation with Vriesde IT from Amsterdam building a transaction system for Amstel Securities N.V. in Amsterdam. |
This enterprise-critical transaction system is used for handling and management of stock exchange transactions. The application is web-based and used worldwide by the various branches of Amstel Securities, like Amsterdam, London, Toronto, Zurich, Geneva, Singapore and Tokyo. The system has a connection to banks with SWIFT for the handling of the financial transactions. The application is further used for the monthly consolidation and maintenance of the system.
The application is build on top of the JBoss J2EE Enterprise server and is using as data-store the MySQL database system. For the creation of the application are various open source libraries used like Struts, Jasperreports, poi and javamail. The production server is running SuSe Linux.
|Dec. 2003 - Feb. 2004||Development of software for internal use by Boeve Automatisering. This java J2EE software is build on top of the JBoss Application server. For this development is used JBuilder, jsp, xml, xsl, Struts, Ant, Tomcat, Velocity and various other open source libraries. |
The following systems are build (or are still in development):
- An invoice management system (web application), it generates the invoices as pdf (by using xml, xsl-fo and fop). The system is for internal use with a connection to the Emeal system (see below) for the automatic generation on the monthly invoices.
- A portal for weblog (moblog) applications (still in development) to be used on the web, i-mode, wap and for PDA's.
|Sept. 2001 - Nov. 2002||Working for Trinity Security B.V. in Mijdrecht. (http://ww.trinitybv.nl) |
Software development for the Palm 505 in C. The Palm is used as the user interface for the MIES system. MIES is a navigation and information system used in cars (see http://www.mies.nl).
Palm version converted to Windows CE and Pocket PC.
|Jan. 2001 - Sept. 2001||Development of software for PDA's with the Palm OS. |
Software written in Java and C.
|Febr. 2000 - Dec. 2000||Working as system/intranet programmer in Java and Perl at KPN VAS Amsterdam (http://dedicated.kpnhosting.nl) on a Solaris system |
KPN VAS takes care of the dedicated hosting for a great number of companies like NOS, Buhrmann, the ministries of economic affairs and foreign affairs, etc. To support system management, project administration and calculation of the invoice data, I built an intranet in server-side Java (servlets) on top of a MySQL database. Also created a few Perl scripts to perform system management tasks with data from the database.
|Febr. 1999 - Dec.1999||Working as system/website programmer (mainly in Perl) at Internet Connect Centre B.V.in Diemen on HP 9000 unix and Windows NT systems. |
ICC is a webhosting company (see: http://www.iconnect.nl) for top Dutch companies like the Dutch Railways, Albert Heijn and KPMG. ICC is a subsidiary of UUnet. Work was mainly on intranet software for system maintenance, project administration and invoice calculation.
|Nov. 1998 - Up till now||Further development of the PizzaOnline/Emeal website (see: http://www.emeal.nl) for on-line ordering of pizza's and other meals. The software for this site is completely rewritten in server site Java (servlets).The Java software is via a JDBC interface connected to a MySQL database server. The site has a build in SET payment system in combination with software from InterPay B.V. (I-Pay). The server is a Linux (RedHat) system with an Apache webserver.|
|Sept. 1997 - Nov. 1998||Software development for IMIS B.V. in Lelystad. (http://www.imis.nl) |
The IMIS software is written in C and used in control rooms. The software displays a map of the Netherlands on which additional information can be shown. The system can be connected to external systems with a serial link or TCP/IP connection and runs on Windows NT or Silicon Graphics O2 systems. Did work on applications for Securop, the control rooms of RWS in Utrecht and Planken Wambuis (connecting the IMIS system to camera systems from Philips). Also did work on the application for the Floating Car Data test project of RWS.
|Jan. 1997||Development Macintosh software in C++ for 'newMetropolis' science and technology centre in Amsterdam (http://www.newmetropolis.nl).|
|Nov. 1995 - Up till now||Setting up and maintaining an internet site (on a Linux system) with domain name 'boa.nl'. |
Here you can find the following World Wide Web sites:
Information about Boeve Automatisering.
On-line pizza ordering by various pizza companies.
On-line meal ordering by various catering companies.
Information about the Easy*Disc products.
|Nov. 1995 - Nov 1999||In co-operation with 'Regie Postma & Postma' from Amsterdam: |
W3 ezine about domestic items.
This site is not existing anymore.
Motorsite for the 'Landelijk Overleg Orgaan Toerklubs' (LOOT) a Dutch union of 160 motorcycleclubs.
Shopping site around I-Pay. With various digishops like the Dierenvriend, NBAT etc.
This site is not existing anymore.
An extranet, closed to the general public, used by the Rosenthal dealers to sell their surplus or unsellable articles. The selling system is based on an anonymous auction where dealers can bid on each other's items.
The CGI software is written in Java and is using a mSQL database.
This site is not existing anymore.
These sites have a lot of cgi software written in C and Java. At the PizzaOnline, Neerland and OmniPlazA sites you can pay with the I-Pay system of Interpay B.V.
|March 1993 - Aug. 1997||In co-operation with Invers B.V. in Den Haag (http://www.easy-disc.nl) |
development of PC (MS-DOS) software for the home (consumer) marked. This software collection with the name 'Easy*Disc' is being sold via magazine retail shop and via the well-known computer shops.
This software is written in C++.
|Oct. 1992 - Feb. 1993||FROG Systems B.V., Utrecht (as freelancer) |
Analyse and design of the statistics module of SuperFROG. Created a multimedia demo for maintenance and problem diagnosis of the FROG vehicles.
|Jan. 1991 - Oct. 1992||FROG Systems B.V., Utrecht (http://www.frog.nl)|
|July 1987 - Dec. 1990||Industrial Contractors Holland B.V., Utrecht |
Research on obstacle detection and avoidance for the FROG navigation system. FROG (Free Ranging on Grid) is a navigation and control system for automated vehicles (AGV systems). Design and development of an Apple Macintosh user interface for the supervisory system SuperFROG. Installed SuperFROG systems and FROG vehicles at several customer locations, such as Apple Computer in Fremont. Customer support provided to various customers in the USA and Singapore. Training provided to various customers in the use of the FROG and SuperFROG systems, both on-site and internally. Worked several times as stand crew at the Hannover Messe (computer fair). Design and development of an AppleTalk network interface for SuperFROG. System maintenance provided for Macintosh, UNIX and MS-DOS systems. Co-operated on the development of the navigation system for the unmanned container vehicles of ECT (Container Terminal) in Rotterdam. Made a complete redesign of the SuperFROG system and developed several major modules. Project management performed on the SuperFROG project. Did research on routing algorithms for unmanned vehicles as part of a Spin project. Gave advice and support during sales activities with various customers such as HP in Böblingen (Germany), HP in Singapore and Apple Computer in Cork. Various specifications written for several customers such as HP in Singapore. Analysis and design of the transport control (job dispatcher) module of SuperFROG. Experience with UNIX (Sun, HP, A/UX), Apple Macintosh and MS-DOS. Knowledge of the languages C, C++, Pascal, Basic and Fortran.
|Sept. 1986 - June 1987||Rijnhaave Automatisering B.V., Leiden |
Call Computer Services
Holland Data Groep, Amsterdam
Software development for option and stock market software, portfolio management, on-line price and ordering systems. Programming was performed in BASIC on MS-DOS computers.
|Oct. 1982 - March 1983||First officer|
|Oct. 1982 - March 1983||Second officer|
|Sept. 1979 - June 1980||Aspirant officer |
Smit Internationale, Zeesleep- en bergingsbedrijf B.V., Rotterdam.
Active as mate on board of various large and smaller ocean going tugs involved with towing and salvage work.
|June 1977 - May 1978||Apprentice officer |
Nedlloyd Lijnen B.V., Rotterdam.
Stage on board of a conventional freighter. Performing work on cargo, navigation and maintenance of the ship.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00013.warc.gz
|
CC-MAIN-2023-50
| 13,586 | 95 |
http://aero.stanford.edu/PlanOpt.html
|
code
|
Several computational tools for aircraft design are being developed in this program. These include improved methods for aircraft synthesis and optimization such as S. Wakayama's thesis work on wing planform optimization, described below.
A method is being developed for optimizing wing planform shapes for subsonic transport aircraft. Aerodynamic and structural analyses are integrated with a sequential quadratic programming optimizer to yield successful wing planform optimization. The aerodynamic drag analysis considers induced, profile, and compressibility drag. An advanced structural analysis was developed to evaluate wing weight and stiffness through consideration of bending strength and buckling constraints at multiple design conditions. Effects of static aeroelasticity and bending relief due to fuel inertia are also evaluated in the structural analysis. Maximum lift is calculated through a critical section analysis, with a correction for induced camber developed to handle flapped wings.
The method generates realistic planform designs and has been used to study non-planar wing tips and effects of designing for natural laminar flow on a business jet. Further development has extended the method to wing-tail combinations, with constraints on trim, static margin, and gear placement.
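As a toy sketch of the kind of coupling described here, a crude drag model can be minimized under a mock structural constraint with SciPy's SQP routine; every model and number below is an invented placeholder, not the thesis analysis:

    # Toy illustration only: a crude induced + profile drag model and a
    # mock bending-weight constraint, optimized with sequential quadratic
    # programming (SciPy's SLSQP).
    import numpy as np
    from scipy.optimize import minimize

    q = 0.5 * 0.38 * 230.0**2      # dynamic pressure at a placeholder cruise point
    W = 60000.0 * 9.81             # aircraft weight in newtons (placeholder)

    def drag(x):
        span, area = x
        aspect_ratio = span**2 / area
        cl = W / (q * area)                          # lift coefficient at cruise
        cd_induced = cl**2 / (np.pi * aspect_ratio * 0.85)
        cd_profile = 0.008 * (1.0 + 0.1 * area / 100.0)
        return q * area * (cd_induced + cd_profile)  # drag in newtons

    def bending_margin(x):
        span, area = x
        # Mock structural analysis: bending material grows rapidly with span.
        return 4000.0 - 2.0 * span**3 / area         # must remain >= 0

    result = minimize(drag, x0=[30.0, 120.0], method="SLSQP",
                      bounds=[(20.0, 40.0), (80.0, 160.0)],
                      constraints=[{"type": "ineq", "fun": bending_margin}])
    print(result.x, result.fun)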
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689661.35/warc/CC-MAIN-20170923123156-20170923143156-00193.warc.gz
|
CC-MAIN-2017-39
| 1,301 | 3 |
https://nips.cc/virtual/2021/22017
|
code
|
Oral (10 min)
- Policy Mirror Descent for Regularized RL: A Generalized Framework with Linear Convergence, Wenhao Zhan
Spotlights (5 min)
- Integer Programming Approaches To Subspace Clustering With Missing Data, Akhilesh Soni
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization, Boyue Li
- Better Linear Rates for SGD with Data Shuffling, Grigory Malinovsky
There will be a Q&A in the last 5 minutes for all speakers. Abstracts for the talks are below the schedule.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510810.46/warc/CC-MAIN-20231001073649-20231001103649-00499.warc.gz
|
CC-MAIN-2023-40
| 522 | 7 |
http://www.promiseangels.com/yury-magda/visual-c-optimization-with-assembly-code/SKU/258922
|
code
|
Item description for Visual C++ Optimization with Assembly Code by Yury Magda...
Describing how the Assembly language can be used to develop highly effective C++ applications, this guide covers the development of 32-bit applications for Windows. Areas of focus include optimizing high-level logical structures, creating effective mathematical algorithms, and working with strings and arrays. Code optimization is considered for the Intel platform, taking into account features of the latest models of Intel Pentium processors and how using Assembly code in C++ applications can improve application processing. The use of an assembler to optimize C++ applications is examined in two ways, by developing and compiling Assembly modules that can be linked with the main program written in C++ and using the built-in assembler. Microsoft Visual C++ .Net 2003 is explored as a programming tool, and both the MASM 6.14 and IA-32 assembler compilers, which are used to compile source modules, are considered.
Promise Angels is dedicated to bringing you great books at great prices. Whether you read for entertainment, to learn, or for literacy - you will find what you want at promiseangels.com!
Est. Packaging Dimensions: Length: 9.06" Width: 7.4" Height: 0.94" Weight: 1.76 lbs.
Release Date May 1, 2004
Publisher A-List Publishing
ISBN 193176932X ISBN13 9781931769327
Reviews - What do customers think about Visual C++ Optimization with Assembly Code?
Don't bother Sep 22, 2007
This book is so bad, I don't know where to start:
1. Riddled with typos
2. Keeps using _beginthread (instead of _beginthreadex)
3. Better information available in online docs.
More than just a language barrier. Mar 29, 2005
Do yourself a favour and read the introduction, or really any part of it, before plonking down your hard-earned money. Thank God I could return it.
Helpful book Feb 7, 2005
The book delivers what the title promises: how to combine Visual C++ with assembler. Each possible combination of calls (C++ -> assembler, assembler -> C++) is explained in great detail. Examples are kept simple, which helps the reader not to lose track of the point each example is meant to illustrate. You should not expect much instruction on how to write the fastest assembly code possible. The book gives only a couple of hints in chapter 1 ("The existing loop commands ... slow down the overall performance of the program and are indeed an anachronism to the modern processor models."). That doesn't prevent the author from using that loop command in later examples anyway. Sometimes I also missed explanations of the assembly commands and MASM directives used in the examples. If you want to completely understand what's going on, you need additional reference material (Intel, MASM) at hand. Conclusion: it's a great book about all aspects of interaction between Visual C++ and assembly language. If you want to know how to get the most out of your assembly code, you should use a different book (e.g., "Inner Loops" by Rick Booth).
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119637.34/warc/CC-MAIN-20170423031159-00447-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 3,014 | 14 |
https://ebookreading.net/view/book/Bluetooth+Low+Energy%3A+The+Developer%E2%80%99s+Handbook-EB9780132888394_20.html
|
code
|
Anyone who considers protocol unimportant has never dealt with a cat.
—Robert A. Heinlein
The Logical Link Control and Adaptation Protocol (L2CAP) is a protocol multiplexing layer that enables Bluetooth low energy to multiplex three different channels. It also enables segmentation and reassembly of packets that are larger than the underlying radio can deliver. On a Bluetooth classic radio, the L2CAP layer also performs many additional, complicated operations.
One of the basic concepts for Bluetooth low energy is a radically different connectionless model; this means that you only have to create a connection when you need to send data, and the device can always disconnect at any time. To achieve this, the connectionless model must be extended up to the L2CAP layer; thus, only fixed channels are supported. Fixed channels don’t have any configuration parameters to negotiate, and they exist as soon as the lower layers have established a connection; consequently, there is no time wasted waiting for the channel to be created.
When Bluetooth low energy was first designed, it did not use L2CAP. Instead, a Protocol Adaptation Layer (PAL) was designed to be a highly optimized but severely restrictive multiplexer between two protocols; the PAL carried only the Attribute Protocol and its own signaling layer. This was bad for two reasons: flexibility and legacy implementations.
The PAL could only support two types of packet: a single higher-layer protocol or its own signaling layer. There was no segmentation or reassembly, nor was there the ability to separate different protocols. One of the basic tenets of protocol design is that you layer protocols; each protocol is self-contained. This means that it is possible to design, for example, the Security Manager independently of all the other parts of the system. At the point of implementation, each protocol is a separate layer that can be individually tested. The PAL broke this simple rule. The part that killed this approach, however, was not the design, but the lack of flexibility.
Most multiplexing layers perform segmentation and reassembly. This means that a large protocol packet from a higher layer can be segmented into multiple smaller packets appropriately labeled so that they can be transmitted through a system that has packet length restrictions. A good example of this is an ATM network for which each packet is restricted to just a few bytes of data, allowing the rapid switching between different streams. This facilitates the delivery of low-latency audio traffic and bulk data at the same time.
The Host Controller Interface (HCI) supports segmentation and reassembly by using the “start” and “continuation” bits on each data packet. However, the PAL didn’t support such a basic feature. This meant that the maximum size of any application data in this layer would be limited to just 24 bytes of data. This severe restriction was the eventual downfall of the PAL.
When L2CAP was proposed as an alternative, the group designing Bluetooth low energy split down the middle: the companies that already had existing Bluetooth implementations and the companies that didn’t. In some standards bodies, this would have meant many months of acrimonious voting to attempt to force division; this is also typically associated with disruptive political actions like trying to stuff the room with voting members to try to sway the vote one way or the other. In Bluetooth, this is not the standard approach. Instead, a paper on the various costs of each approach was written showing the cost of adding L2CAP. The deciding argument was that the battery life of a device that reported something once a second was reduced from 3.3 years to 3.2 years. So L2CAP did reduce the battery life of the device, but compared with the 7 bytes before the payload of the packet, and the 3 bytes of cyclic redundancy check (CRC) on every single packet whether it was carrying data or not, it was not a significant reduction. This is another example of the attention to detail that the designers of Bluetooth low energy took to consider the system design issues of all the decisions.
L2CAP gives you the ability to plug Bluetooth low energy into an existing L2CAP implementation. It also supports the full segmentation and reassembly from Bluetooth classic, effectively allowing packet sizes of up to 65,535 bytes in length, even though no protocol that can run on Bluetooth low energy supports packets that large. L2CAP also retains the channel model that Bluetooth classic uses.
In Bluetooth classic, the channels come in two different flavors: fixed and connection-oriented. A fixed channel exists for as long as the two devices are connected. These are used primarily for signaling channels, either for basic L2CAP signaling commands or, in v2.0 and later, an Alternate MAC/PHY signaling channel. Connection-oriented channels can be created at any time by sending a few L2CAP signaling commands to a peer device.
In Bluetooth classic, connection-oriented channels allow data from an individual pair of applications to be considered as separate from the data of other channels. For example, even though connection-oriented channels can add additional data integrity checking, they might have a different flow specification, or they might be a streaming channel rather than a best-effort channel. Connection-oriented channels are great when you have a complex system that has multiple, varied types of data being transmitted at the same time. For example, a phone and a car can have multiple different protocols running at the same time: one stream for the high-quality audio from the phone to the car stereo; one stream for the hands-free operation; another stream for the phone book; and perhaps another stream for an Internet connection.
Opening connection-oriented channels can be a complex operation. Each L2CAP channel has a large number of configuration parameters; seven in the latest specification. This means that in addition to the two messages that have to be exchanged to request a connection to be established, each of the configuration parameters has to be agreed upon before any data is allowed to be sent. This could be fairly quick—just another four messages—or it could be a fairly lengthy operation of proposed values and counter proposals. The other complexity that connection-oriented channels bring is that once they are all configured and data is flowing, a device can renegotiate different parameters. All this increases the latency of the data connection at the expense of more flexibility. For most Bluetooth classic protocols and profiles, this is an acceptable cost because these connections are kept alive for long periods of time.
In L2CAP, there is a simple concept of a channel. L2CAP, after all, is a multiplexing layer, and to do this, it has multiple channels. A channel is a single sequence of packets, from and to a single pair of services on a single device. Between two devices, there can be multiple channels active at the same time.
In Bluetooth low energy, only fixed channels are supported. A fixed channel is a channel that exists as soon as the two devices are connected; there is no configuration requirement for fixed channels. The future-proofed flexibility still exists to add connection-oriented channels if they are considered necessary.
Table 9–1 presents the L2CAP channel identifiers. Each channel identifier in Bluetooth is a 16-bit number. The channel identifier 0x0000 is reserved and should never be used. Channel identifier 0x0001 is a fixed channel for Bluetooth classic signaling.
Channel identifier 0x0002 is a fixed channel used for “connectionless data,” although there is no profile that currently uses this. Channel identifier 0x0003 is used for the Alternate MAC/PHY protocol when sending data at high speed is required. Channel identifier 0x003F is used for a test channel for the Alternate MAC/PHY controllers.
There are three Bluetooth low energy channels: Channel identifier 0x0004 is used for the Attribute Protocol (for more information on this, go to Chapter 10, Attributes); Channel identifier 0x0005 is used for the Bluetooth low energy signaling channel; Channel identifier 0x0006 is used for the Security Manager (for more information on this, go to Chapter 11, Security). All the other channel identifiers from 0x0007 to 0x003E are reserved, and channel identifiers from 0x0040 to 0xFFFF can be used for connection-oriented channels.
Each L2CAP packet contains a 32-bit header followed by its payload. It is assumed that segmentation and reassembly is used; thus, the length of the packet must be included in the packet header so that the end of the packet can be determined. The segmentation and reassembly scheme used requires the marking of packets over the HCI interface (for more information on this, go to Chapter 8, The Host Controller Interface) as well as within each transmitted packet as either a start or continuation packet. There is no way to denote that a given L2CAP packet segment is the end of the current packet. This means that the only way to determine if the current packet is complete is to either send a new packet, assuming that one is ready to be sent, or to include the packet length in the very first packet sent.
As shown in Figure 9–1, the header contains a 2-byte length field followed by the 2-byte channel identifier. This is followed by length bytes of information payload. In Bluetooth classic, the information payload can also include additional headers and information, but in Bluetooth low energy, there are no other structures of significance at the L2CAP layer.
For all Bluetooth low energy channels, the information payload starts with a Maximum Transmission Unit (MTU) size of 23 bytes. MTU is the largest possible size for the information payload in a given L2CAP channel. This means that all Bluetooth low energy devices must support 27-byte packets over the air—4 bytes of the L2CAP header and 23 bytes for the information payload.
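A minimal sketch of this framing (the channel numbers are those given in the text; the payload bytes and function names are illustrative):

    # L2CAP framing for Bluetooth low energy: a 2-byte little-endian length,
    # a 2-byte channel identifier, then the information payload.
    import struct

    ATT_CID = 0x0004            # Attribute Protocol
    LE_SIGNALING_CID = 0x0005   # LE signaling channel
    SMP_CID = 0x0006            # Security Manager

    LE_MTU = 23                 # default information-payload limit per channel

    def build_l2cap(cid, payload):
        if len(payload) > LE_MTU:
            raise ValueError("payload exceeds the 23-byte LE MTU")
        return struct.pack("<HH", len(payload), cid) + payload

    def parse_l2cap(packet):
        length, cid = struct.unpack_from("<HH", packet)
        return cid, packet[4:4 + length]

    pdu = build_l2cap(ATT_CID, b"\x02\x17\x00")   # arbitrary example bytes
    print(parse_l2cap(pdu))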
The LE signaling channel is used for signaling at the host level. As illustrated in Figure 9–2, each LE signaling channel packet contains a single opcode, followed by any parameters. The following command opcodes are supported on the LE signaling channel:
• Command Reject
• Connection Parameter Update Request
• Connection Parameter Update Response
Whenever a signaling command is sent, an identifier is included in the information payload. This identifier is just 1 byte in length and is used to match responses with requests. For example, if a request was sent with the identifier 0x35, any response that also had the same identifier 0x35 would be the response for that request. This allows multiple requests to be outstanding at the same time, with each request having a different identifier. Identifiers can’t be reused unless all other identifiers have been used. This leads implementations to use an increment operation to ensure this rule is met. There is just one exception to this: An identifier with the value 0x00 is never used. A side effect of the use of identifiers is that duplicate commands can be silently dropped. This would be useful if the command channels were unreliable, but they are always sent on a reliable bearer, so this rule is rarely invoked.
In Bluetooth low energy, because only one request has been defined, and because this request can only be sent when no other request is outstanding, the logic for identifiers is very simple.
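The identifier bookkeeping described above reduces to a couple of lines; this sketch is illustrative, not specification text:

    def next_identifier(current):
        # Increment modulo 256, skipping the reserved value 0x00.
        nxt = (current + 1) & 0xFF
        return nxt if nxt != 0x00 else 0x01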
The command reject command is used to reject any nonsupported message that was received by the device. This command is identical to the Bluetooth classic command reject command. It contains a reason code and can contain some data. The reason code can be either Command not understood or Signaling MTU exceeded.
The Command not understood reason code is used when a command was sent to the device that it does not support. This should be sent even for command codes that are not defined at the moment; this allows a device to be forward-compatible with future versions of the specifications.
The Signaling MTU exceeded reason code is used when a command is received that is longer than 23 bytes. The default MTU for the signaling channel is just 23 bytes, so if a command were received that was 24 bytes or more, the command reject would be sent in reply.
In Bluetooth classic, another reason code is defined, Invalid CID in request, but because no commands are defined that use a channel identifier in Bluetooth low energy, this reason code has never been used.
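A sketch of building the command reject; the opcode value 0x01 and the numeric reason codes are taken from the Bluetooth core specification rather than from this chapter, so verify them there before relying on them:

    import struct

    COMMAND_REJECT = 0x01             # signaling opcode (assumed from the spec)
    REASON_NOT_UNDERSTOOD = 0x0000    # Command not understood
    REASON_SIG_MTU_EXCEEDED = 0x0001  # Signaling MTU exceeded

    def command_reject(identifier, reason):
        payload = struct.pack("<H", reason)
        # Signaling header: opcode, matching identifier, 2-byte payload length.
        return struct.pack("<BBH", COMMAND_REJECT, identifier, len(payload)) + payload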
The connection parameter update request command provides the slave device with the ability to ask for the Link Layer connection parameters to be updated, as demonstrated in Figure 9–3. These parameters include how often the slave wants the master to allow it to transmit (the connection event interval), how often the slave wants to be able to ignore the master (the slave latency), and the supervision timeout.
This command would be used when the slave is in a connection for which it wants to modify current connection parameters. For example, the connection event interval might be too fast and therefore wasting too much power. This would not be a problem if the slave latency were reasonably high, but if this is not true, then the slave would have to listen very frequently. Sometimes this is useful, for example, when the devices are first bonding and sending many messages between one another, discovering the services and characteristics of the device. But many other times, having the ability to minimize the number of connection events when the slave has to listen is vitally important for efficient battery life.
This command is only usefully sent from the slave to the master; the master can always initiate a Link Layer connection parameter update control procedure at any time (see Section 7.10.1 in Chapter 7). If the command is sent by the master, the slave would consider it an error and would respond with a Command Reject command with the reason code Command not understood.
The slave can send this command at any time. If the master receives the message and can change the connection parameters, it will respond with a Connection Parameter Update Response with a result code set to accepted. The master will also initiate the Link Layer connection parameter update control procedure.
Of course, this is just a request, and if the master doesn’t like the parameters that the slave wanted, it can reject the request by sending a Connection Parameter Update Response with the result code set to rejected. The slave then has two options: accept that the master wants or needs the connection parameters that it is currently using, or terminate the connection. Terminating the connection might appear at first glance to be a fairly drastic approach, but if the slave would burn through its battery in a week with the current connection parameters but would last for years with its requested connection parameters, it might have only one logical choice available.
To reduce the probability of having the master reject the connection parameters from the slave, the slave can request a range of connection event intervals that would be acceptable. A well-designed slave would willingly accept a wide range of intervals. A master device might also be doing some other activities such as a low latency conversational audio connection or a high-quality audio connection and is therefore severely restricted in the range of connection intervals that it can accept. The set of intervals it can accept might be different depending on what it is currently doing, so it might not be the same as the last time the two devices connected.
Another way to increase the chance that the master will accept the connection parameters is to have a reasonably sized slave latency. The master can then choose the most suitable connection event interval, and the slave can then use a slave latency that gives it the best power consumption. For example, if the slave wants to synchronize every 600 milliseconds, it could request a connection interval range of between 100 milliseconds and 750 milliseconds, with a slave latency of 5. If the master chooses 100 milliseconds, the slave could synchronize every 6 connection events. If the master chooses 200 milliseconds, then the slave could ignore 2 out of every 3 connection events, achieving its desired synchronization interval of 600 milliseconds. If the master chooses 300 milliseconds, the slave could ignore every other connection event. If the master chooses 400 milliseconds, the slave could synchronize every 400 milliseconds.
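The 600-millisecond example can be turned into an actual request; the opcode 0x12 and the field layout (intervals in 1.25 ms units, timeout in 10 ms units) come from the Bluetooth core specification rather than from this chapter, so treat them as assumptions to verify:

    import struct

    def conn_param_update_request(identifier, interval_min_ms, interval_max_ms,
                                  slave_latency, timeout_ms):
        payload = struct.pack("<HHHH",
                              int(interval_min_ms / 1.25),   # interval min, 1.25 ms units
                              int(interval_max_ms / 1.25),   # interval max, 1.25 ms units
                              slave_latency,                 # events the slave may skip
                              timeout_ms // 10)              # supervision timeout, 10 ms units
        # LE signaling header: opcode (0x12 assumed), identifier, payload length.
        return struct.pack("<BBH", 0x12, identifier, len(payload)) + payload

    # The slave asks for 100-750 ms with a slave latency of 5, as in the text.
    pdu = conn_param_update_request(0x35, 100, 750, slave_latency=5, timeout_ms=4000)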
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00606.warc.gz
|
CC-MAIN-2023-14
| 16,685 | 38 |
https://onlinemanual.nikonimglib.com/z7_z6/en/09_menu_guide_05_b02.html
|
code
|
Choose whether the E button is needed for exposure compensation.
- On (auto reset): In modes P, S, and A, exposure compensation can be set by rotating the command dial not currently used for shutter speed or aperture (easy exposure compensation is not available in mode M). The setting selected using the command dial is reset when the camera turns off or the standby timer expires (exposure compensation settings selected using the E button are not reset).
- On: As above, except that the exposure compensation value selected using the command dial is not reset when the camera turns off or the standby timer expires.
- Off: Exposure compensation is set by pressing the E button and rotating the main command dial.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474594.56/warc/CC-MAIN-20240225071740-20240225101740-00357.warc.gz
|
CC-MAIN-2024-10
| 715 | 4 |
https://itknowledgeexchange.techtarget.com/it-consultant/blackberry-down-again/
|
code
|
Hi folks! For crying out loud, BlackBerrys are down again. I just spent the last little while troubleshooting our servers and BlackBerrys, only to find out that it's out of my control.
Apparently RIM has been having issues with various features in the BlackBerry network such as email, BlackBerry messenger, etc.
Like I said before, RIM really needs to get it together. How can this happen twice in less than a month? This is horrible for RIM and they really need to look at how they are providing their network services and make them more resilient.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540534443.68/warc/CC-MAIN-20191212000437-20191212024437-00531.warc.gz
|
CC-MAIN-2019-51
| 546 | 3 |
https://www.microsoftpressstore.com/articles/article.aspx?p=2224060&seqNum=3
|
code
|
In this chapter, you learned how to create the overall structure of a document and how to divide it into head and body sections. You learned how to create paragraphs and how to add a page title. Here are the key points to remember from this chapter:
To specify HTML5 as the document type, type <!DOCTYPE html> at the beginning of the file.
All the HTML coding in a document (except the DOCTYPE) is enclosed within a two-sided <html> tag.
The <html> and </html> tags enclose the <head> and <body> sections.
The <head> area contains the page title (<title>) and any <meta> tags. The <body> area contains all the displayable text for the page.
Enclose each paragraph in a two-sided <p> tag. Most browsers add space between paragraphs when displaying the page.
To create a line break without starting a new paragraph, use the one-sided <br> tag.
When coding for XHTML, end one-sided tags with a space and a slash (/). The space is required for recognition in HTML, and the slash is necessary for recognition in XHTML.
Use the <title> and </title> tags to enclose the text that should display in the browser’s title bar. Place these in the <head> section of the file.
Use <meta> tags in the <head> section to indicate keywords and the document encoding language.
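A minimal page that puts these key points together (the title, keywords, and paragraph text are placeholders):

    <!DOCTYPE html>
    <html>
    <head>
      <title>My Sample Page</title>
      <meta charset="utf-8">
      <meta name="keywords" content="sample, html5">
    </head>
    <body>
      <p>This is the first paragraph.</p>
      <p>This paragraph has a line break<br>
      without starting a new paragraph.</p>
    </body>
    </html>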
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00666.warc.gz
|
CC-MAIN-2022-21
| 1,259 | 10 |
https://ipht.cea.fr/Phocea/Vie_des_labos/Seminaires/index.php?id=991966
|
code
|
Understanding cosmic magnetism: from the very early Universe till today
Carnegie Mellon University
Mercredi 21/12/2011, 14:15
Salle Claude Itzykson, Bât. 774, Orme des Merisiers
Observations show that galaxies have magnetic fields with a component that is coherent over a large fraction of the galaxy, with defined field strength and coherence scale. Understanding the origin of these fields is one of the more challenging questions of modern astrophysics. There are currently two pictures: a bottom-up (astrophysical) one, generating the seed field on smaller scales, and a top-down (cosmological) version, generating the seed field prior to galaxy formation on scales that are now large. In my talk I will discuss briefly several relevant questions: (i) How and when was the magnetic field generated? (ii) How does it evolve during the expansion of the universe? (iii) Can the amplitude and statistical properties of this seed magnetic field explain the properties of the observed magnetic fields in large-scale structures? (iv) Is the seed magnetic field detectable through cosmological observations? And if so, (v) what are the observational constraints on such a primordial magnetic field?
Contact : ccaprini
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818835.29/warc/CC-MAIN-20240423223805-20240424013805-00758.warc.gz
|
CC-MAIN-2024-18
| 1,214 | 6 |
https://issues.jenkins.io/browse/JENKINS-28288
|
code
|
I see there are multiple .lck files that are opened and do not go away. From reading up on FileHandler, that typically indicates multiple instances of FileHandler being instantiated. They say that is typically caused by multiple JVMs writing to the same log file. However, I am only using one JVM (verified with ps).
I'm wondering if the issue is in hudson.plugins.audit_trail.LogFileAuditLogger line 21:
private transient FileHandler handler;
I'm thinking maybe the plugin creates several instances of the audit-trail logger, and therefore multiple FileHandler instances end up trying to write to the same log file.
Is there a reason this attribute is marked as transient? I didn't think this was a serializable object. Perhaps the solution would be to mark it as static so that multiple instances do not exist in the same container.
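A minimal sketch of that suggestion, sharing one handler so only a single .lck file is created per JVM; the initialization details below are assumed, not taken from the plugin source:

    import java.io.IOException;
    import java.util.logging.FileHandler;

    public class LogFileAuditLogger {
        // static instead of transient: one handler (and one .lck file) per JVM
        private static FileHandler handler;

        private static synchronized FileHandler getHandler(String logFile)
                throws IOException {
            if (handler == null) {
                handler = new FileHandler(logFile, true);   // append mode
            }
            return handler;
        }
    }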
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499871.68/warc/CC-MAIN-20230131122916-20230131152916-00477.warc.gz
|
CC-MAIN-2023-06
| 831 | 5 |
https://www.engineerjobs.com/jobsearch/viewjob/tnJ3719YnxVCYXcBs9hQInfboPpes_dlNGv1GAYpQ5GONnfqxNU88A?ak=&l=fort+worth+texas
|
code
|
With over 8,000 employees worldwide, the Microsoft Customer Services & Support (CSS) team includes support engineers, advocates, and business leaders who serve our customers in 191 countries and 46 languages, providing best-in-class support services. Our CSS team is on the front lines with our customers – Consumer through global Enterprises - ultimately empowering every person and every organization on the planet to achieve more.
It is an amazing time for our industry. Microsoft has an opportunity to be the market leader in customer support experience and we need your leadership to help us act on it. The Customer Service and Support organization is the place where the future, digital end to end customer support experiences will come to life. As an organization we seek to empower, help and guide customers on how to get the most from their technology investments while enabling our people to put customers at the center of all that we do.
The Resource Strategy and Vendor Planning (RSVP) - Demand Forecast team is seeking a senior-level data science/analytics and engineering switch-hitter to help drive forecast excellence. As the Forecasting Lead (Sr. Data Scientist), you will be responsible for partnering with internal teams to gather requirements, prototype, architect, and implement/update forecasting solutions. The individual must be able to adjust to constant business change; common types of changes include new requirements, evolving strategies, and emerging technologies. This requires the ability to interact, develop, engineer, and communicate collaboratively at the highest technical levels with SMEs and others. Because the position requires extensive interaction with other groups within CSS and other Microsoft functions, you must possess strong interpersonal and communication skills while maintaining a determined attitude.
About our immediate team:
We’re a highly visible team, responsible for delivering global operational demand forecasts for resource planning across the Microsoft suite of products globally. This includes everything from mining to exposing past customer-driven trends, to research leading to behavior predictions for new product launches, to learning emerging technologies required for solution automation and machine learning (ML), and understanding the business impacts of everything in-between. This is a rare opportunity to develop and use multiple skills in a fast-paced and constantly changing environment.
- Build and develop analytical models working alongside a strong team of Leads, Analysts and Architects to deliver demand forecasts for CSS and its SBUs
- Develop experimental and analytic plans for data modeling processes, use of strong baselines and ability to accurately determine cause and effect relationships
- Engage broadly with the Stakeholders / SBU / LoB leadership to frame, structure and prioritize business problems where insights can have the biggest impact
- Partner and collaborate with other teams on related deliverables, and effectively leverage others in relevant work streams.
- Create and manage standards, processes, and procedures to ensure agile delivery and consistent operations
- Design and implement changes in the forecasting systems and processes to incorporate rapidly changing business strategies and to apply advanced analytics concepts towards rich and bold insights
- Act as an evangelist and catalyst for BI & Analytics innovation
- Manage the portfolio of work including monthly governance with stakeholders
- Attract, develop, and retain talent while improving the productivity, efficiency, and effectiveness of the team and/or business
- 10+ years of hands-on analytical experience across regression (linear/logit/gamma), K-means/K-modes clustering, Markov chain Monte Carlo, non-linear time series, text mining, dynamic programming, hypothesis testing, OR and optimization techniques, fraud analytics, etc.
- BS degree in Business, Mathematics, or a related business discipline. A master’s degree in Statistics/Economics or related discipline is preferable, but a candidate with strong analytical background in problems relating to Big Data and Analytics is also acceptable
- Strong forecasting experience required with an emphasis in Customer Support
- Proven track record of delivering highly scalable and reliable systems through multiple cycles
- Experience on Microsoft BI stack and exposure to Big Data, machine learning and data mining, large scale computing systems like COSMOS, Hadoop, MapReduce and other big data technologies including experience in SQL, R, Python, SAS, or similar languages.
- Advanced Excel skills (Pivots, dimensional modeling, linking to external data sources)
- Combination of deep technical skills and business savvy to interface with all levels and disciplines within our organization
- Excellent communications and interpersonal skills. Ability to convince other strong personalities of their ideas and work with partners in problem solving.
- Experience with presenting findings with executive audiences
- Ability to synthesize complex issues/scenarios into easy-to-understand concepts
- Attention to detail with self-discipline and a drive for results.
- Demonstrated ability to work in ambiguous situations and across organizational boundaries.
- Extensive problem-solving skills, self-confidence, and strong leadership qualities
- Experience working in a global team or in 24x7 service operations
- Knowledge of cloud services, such as Windows Azure or Amazon Web Services
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form.
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890028.58/warc/CC-MAIN-20201025212948-20201026002948-00036.warc.gz
|
CC-MAIN-2020-45
| 6,456 | 31 |
https://escholarship.org/uc/item/9nn4m6rq
|
code
|
Packet-Based Power Allocation for Forward Link Data Traffic
Published Web Location: http://ieeexplore.ieee.org/xpls/abs_all.jsp?isnumber=4290009&arnumber=4290031
We consider the allocation of power across forward-link packets in a wireless data network. The packets arrive according to a random (Poisson) process, and have fixed length so that the data rate for a given packet is determined by the assigned power and the channel gain to the designated user. Each user's service preferences are specified by a utility function that depends on the received data rate. The objective is to determine a power assignment policy that maximizes the time-averaged utility rate, subject to a constraint on the probability that the total power exceeds a limit (corresponding to an outage). For a large, heavily loaded network, we introduce a Gaussian approximation for the total transmitted power, which is used to decompose the power constraint into three more tractable constraints. We present a solution to the modified optimization problem that is a combination of admission control and pricing. The optimal trade-off between these approaches is characterized. Numerical examples illustrate the achievable utility rate and power allocation as a function of the packet arrival rate.
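A toy numerical check of the Gaussian approximation mentioned above (the per-packet power distribution and all parameter values are invented for illustration):

    # Compare the empirical outage probability P(total power > limit) with the
    # estimate from a normal approximation using only the mean and variance.
    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)
    n_packets = 200                                     # heavily loaded network
    draws = rng.exponential(1.0, size=(100_000, n_packets))
    total_power = draws.sum(axis=1)

    limit = 230.0                                       # outage threshold
    empirical_outage = (total_power > limit).mean()

    mu, sigma = n_packets * 1.0, sqrt(n_packets * 1.0)  # exponential: mean = var = 1
    gaussian_outage = 0.5 * (1.0 - erf((limit - mu) / (sigma * sqrt(2.0))))
    print(empirical_outage, gaussian_outage)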
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00826.warc.gz
|
CC-MAIN-2023-06
| 1,272 | 3 |
https://community.smartthings.com/t/ge-outdoor-switch-does-not-reconnect-after-hub-is-inactive/97429
|
code
|
I have a v2 hub, a GE indoor plug-in switch, and a GE outdoor plug-in switch. They both paired successfully and I can get both to work correctly. However, after the hub goes inactive for a period of time (router reboots, internet lost, hub update, etc) the outdoor switch does not come back online until it is toggled manually. This issue does not exist with the indoor switch.
I need to figure out how to get the outdoor switch to come back without a manual reset. It’s under my deck and in a very inconvenient location to physically access every time the hub goes inactive.
I do note that the indoor switch shows in the IDE with an execution location of local, whereas the outdoor switch is cloud. I also noted that there were some sporadic outages reported today, but I have seen the issue before.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401585213.82/warc/CC-MAIN-20200928041630-20200928071630-00550.warc.gz
|
CC-MAIN-2020-40
| 800 | 3 |
http://www.linuxquestions.org/questions/linux-distributions-5/debian-slackware-vs-gentoo-which-would-you-suggest-166064/
|
code
|
Currently I am running Gentoo 1.4 but am not really satisfied with it. For the most part I'm not all that thrilled with the distro itself, but I'm in love with Portage (I've used FreeBSD too and also like Ports). I'm willing to change, though; apt looks pretty good, but I still like compiling things from source. I have narrowed my search down to Slackware and Debian. I like to compile my packages from source. Debian does give me some ability to do this with apt, but from other reading I have done, it seems it won't handle dependencies unless I use pre-compiled binaries. Which of these distros would you recommend?
If my hardware influences your recommendation, here it is. I own an IBM eServer xSeries 220. It has a single Pentium III 933 MHz (hoping to match it or get dual 1.0 GHz ones some time in the distant future), 256 MB of RAM (1 GB on the way), a pair of 9.1 GB SCSI-160 HDDs (10k RPM drives) and another pair of 18 GB ones (only 7.2k RPM). The machine also has an onboard NIC, and I added another 3Com one.
Just a little more about what I'm looking for. I want a fast distribution that is also stable. I am willing to sacrifice some speed for more stability and security. This machine is going to be a server but also has to be able to double as my regular machine. I don't do anything fancy with my stuff: just browsing the net for more Linux stuff, writing reports for school, nothing too demanding.
So what do you recommend?
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00437-ip-10-147-4-33.ec2.internal.warc.gz
|
CC-MAIN-2014-15
| 1,446 | 6 |
http://www.science20.com/satellite_diaries/blog/going_space_cheaper_me
|
code
|
Just how much would you pay to go into space? $12000 for a satellite plus launch, like me? Or perhaps... $300 to build a high-altitude balloon camera?
Or, if $300 is too high, how about getting a couple of high school kids to do it for half that? Their €99 ($144) high-altitude balloon is a great achievement in engineering, science, cost reduction, and learning.
Their hardware specs are, alas, not in the article, but some MIT students replicated their work at the same $150 price point.
Alex, the daytime astronomer
Track The Satellite Diaries via RSS feed and Twitter @skyday
Going to space cheaper than me
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122992.88/warc/CC-MAIN-20170423031202-00118-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 611 | 6 |
http://www.kidzworld.com/article/28594-piano-dust-buster-20-preview
|
code
|
I would have a mega charizard.
You know what's funny: you're looking to make a video game, and you come to an underage kids' website. That's funnier than the Facebook ad, to be honest ;-;
I'm going to start a development group for a possible video game I'd like to help develop.
The project code name is Ivolve. It is going to be a 3D environment simulator.
It is a relatively big commitment, so if you're interested, you can reply to this post AFTER 21 JANUARY (AEDT).
There are several positions available.
Head of programming
Public relations manager
More roles to come.
We will communicate through Slack and once you have joined, I'll give you instructions on how to get onto Slack.
P.S. It's pretty funny that I'm underage and this is a kids' social network, yet there is always a Facebook ad at the top of my screen.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281162.88/warc/CC-MAIN-20170116095121-00456-ip-10-171-10-70.ec2.internal.warc.gz
|
CC-MAIN-2017-04
| 835 | 12 |
http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html
|
code
|
Updated: Modified embedded SQL queries to use parameters where appropriate. Results updated.
I've been playing with Entity Framework recently, and noticed that it seemed to be much slower than LINQ to SQL. I ran some tests, and sure enough, I was right. The numbers are interesting:
The code is available here if you want to run these tests yourself.
The structure of the test was to set up a static method to return data from the Customers table of Northwind, suitable for binding to an ObjectDataSource in ASP.NET. I ran two sets of tests, one to return six columns from all rows, and one to return the same six columns from a single row. Each set contained the following variations:
- DataReader, to provide baseline performance to compare against other technologies.
- DataTable, using classic ADO.NET tools (DataAdapter running a command to fill a table).
- LINQ to SQL, using a compiled query, and with object tracking turned off, to maximize performance. The results list was projected directly from the query.
- LINQ to Entity Framework, using a compiled query to maximize performance. As with LINQ to SQL, the results list was projected directly from the query.
- Entity SQL, as an alternative to LINQ, querying the Entity Framework. The code structure for Entity SQL uses a reader, similar to using a DataReader with T-SQL.
For both LINQ to SQL and Entity Framework, I used the visual designer tools to include only the Customers table in the model.
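As a sketch of the LINQ to SQL setup described here (NorthwindDataContext is the designer-generated context; the projection class and column names are assumptions based on Northwind):

    // Compiled query with object tracking turned off, per the test description.
    using System;
    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Linq;

    public class CustomerInfo
    {
        public string CustomerID { get; set; }
        public string CompanyName { get; set; }
    }

    public static class CustomerQueries
    {
        // Compiled once, reused for every call, as in the test harness.
        static readonly Func<NorthwindDataContext, IQueryable<CustomerInfo>> Compiled =
            CompiledQuery.Compile((NorthwindDataContext db) =>
                from c in db.Customers
                select new CustomerInfo { CustomerID = c.CustomerID,
                                          CompanyName = c.CompanyName });

        public static List<CustomerInfo> GetCustomers()
        {
            using (var db = new NorthwindDataContext())
            {
                db.ObjectTrackingEnabled = false;   // read-only: skip change tracking
                return Compiled(db).ToList();
            }
        }
    }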
The test measured elapsed time and total processor time. The difference could be assumed to include time used by SQL Server, as well as any other out-of-process time. I ran the tests on a Dell Latitude E6500 with Vista Ultimate, SQL Server 2008, an Intel Core 2 Duo P9500 (2.5 GHz), 4GB RAM, and 7200 RPM disk. The system was idle except for tests; test runs were fairly consistent in timings, as measured by standard deviations over a set of 10 test runs.
The test program ran each query once to ensure that all code was loaded and JITed, and all access plans and data were cached, so that startup time was excluded for each scenario. The program then ran 10,000 queries and collected aggregate time and working set information. For each scenario, the test program was run once, then run 10 times to record timing data.
Keep in mind that the test was designed to measure only the code execution for queries. There is no business logic, and the test design ensured that start-up costs were excluded from test results.
As expected, using a DataReader with raw T-SQL is the best performer, and the technology of choice for extremely large data volumes and for applications where performance is the only thing that matters. The DataReader used .40 milliseconds (elapsed) to retrieve 92 rows and store the data in a list, and only .15 milliseconds for a single row.
The DataTable with classic ADO.NET performed almost as well, using .58 milliseconds (elapsed) for 92 rows and .18 milliseconds for a single row. In the chart above, the DataReader is used as a baseline for comparison, so the relative cost of using a DataTable and DataAdapter was 1.4 for 92 rows, and 1.2 for a single row. That's not a lot of overhead in exchange for using a standardized structure that includes metadata on names and data types. Memory usage was virtually identical to memory usage for the DataReader.
LINQ to SQL also performed very well, using .63 milliseconds (elapsed) for 92 rows and .36 milliseconds for a single row. The performance ratio compared to the DataReader is 1.6 for 92 rows and 2.3 for a single row. Compared to the DataTable, the performance ratio (not charted) was 1.2 for 92 rows and 1.9 for a single row. LINQ to SQL used 40 MB additional memory, based on the final working set size at the end of each run.
That's very decent performance, considering the additional overhead, although Rico Mariani of Microsoft got even better numbers (and I'd love to know how to get closer to those results). In my tests, all queries established new connection objects (or data contexts) for each query, but I can't tell if Rico did the same in his performance tests. This may account for the difference in performance.
With Entity Framework, I found significant additional performance costs. LINQ to EF used 2.73 milliseconds (elapsed) to retrieve 92 rows, and 2.43 milliseconds for a single row. For 92 rows, that's a performance ratio of 6.8 compared to the DataReader, 4.7 compared to the DataTable, and 4.4 compared to LINQ to SQL (the latter two are not charted above). For a single row, LINQ to EF used 2.43 milliseconds (elapsed), with performance ratios of 16.0 compared to the DataReader, 13.2 compared to the DataTable, and 6.8 compared to LINQ to SQL. Memory usage for LINQ to EF was about 130 MB more than for the DataReader.
Entity SQL queries to EF performed about the same as LINQ to EF, with 2.78 milliseconds (elapsed) for 92 rows and 2.32 milliseconds for a single row. Memory usage was similar to LINQ to EF.
Some of the conclusions are obvious. If performance is paramount, go with a DataReader! Entity Framework uses two layers of object mapping (compared to a single layer in LINQ to SQL), and the additional mapping has performance costs. At least in EF version 1, application designers should choose Entity Framework only if the modeling and ORM mapping capabilities can justify that cost.
In between those extremes, the real surprise is that LINQ to SQL can perform so well. (The caveat is that tuning LINQ to SQL is not always straight-forward.) The advantage that LINQ (including LINQ to EF) offers is in code quality, resulting from two key improvements over classic ADO.NET:
- Names and data types are strongly enforced from top to bottom of your application. Very specifically, that means all the way down to the tables in the database. LINQ uses .NET types, further simplifying the developer's life.
- DataTables and DataSets bring the relational data model rather intrusively into the code. To process data in a DataTable, you must adapt your code to the DataTable structure, including DataRows, DataColumns, and (with DataSets and multiple tables) DataRelationships. By contrast, LINQ is a first-class language component in .NET, with object-relational mapping inherent in the LINQ data providers and models. Processing data with LINQ feels like processing object collections, because that's exactly what you're doing.
So for now, LINQ to SQL is a winner! As Entity Framework version 2 takes shape, it will be time to re-evaluate.
Edited to fix math errors (blush). These affect timings only, but since the chart is based on ratios, those numbers are still correct.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232262600.90/warc/CC-MAIN-20190527125825-20190527151825-00347.warc.gz
|
CC-MAIN-2019-22
| 6,645 | 25 |
http://stackoverflow.com/questions/7732685/python-local-modules
|
code
|
I have several project directories and want to have libraries/modules that are specific to them. For instance, I might have a directory structure like such:
    myproject/
        mymodules/
            __init__.py
            myfunctions.py
        myreports/
            mycode.py
Assuming there is a function called add in myfunctions.py, I can call it from mycode.py with the most naive routine:
But to be more sophisticated about it, I can also do
    import sys
    sys.path.append('../mymodules')
    import myfunctions
    myfunctions.add(1,2)
Is this the most idiomatic way to do this? There is also some mention of modifying os.environ['PYTHONPATH']; is this, or something else, what I should look into?
Also, I have seen import statements contained within class statements, and in other instances defined at the top of the Python file which contains the class definition. Is there a right/preferred way to do this?
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00002-ip-10-164-35-72.ec2.internal.warc.gz
|
CC-MAIN-2016-26
| 849 | 11 |
https://www.encyclopedie-environnement.org/en/author/marion_cordonnier/
|
code
|
Marion Cordonnier did her thesis at LEHNA (Université de Lyon), then spent a year at ESE (Université Paris-Saclay) as a research engineer. She is now a post-doctoral fellow at the University of Regensburg, Germany, in the Department of Zoology / Evolutionary Biology, in a laboratory whose themes revolve mainly around the ecology and evolution of social insects. In recent years, her research has focused on the impact of global changes (urbanization, climate change, and biological invasions) on interactions between species, including genetic exchanges, predation relationships, and competitive interactions. Her work mobilizes a variety of tools, combining, for example, landscape genetics, behavioral biology, and chemical ecology. Her research mainly concerns ants, and occasionally other biological models (birds, mammals).
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476396.49/warc/CC-MAIN-20240303142747-20240303172747-00056.warc.gz
|
CC-MAIN-2024-10
| 848 | 1 |
https://cmmid.github.io/topics/covid19/mixing-patterns.html
|
code
|
Reports from the CoMix social contact survey
This page list all our work on Covid-19 mixing patterns.
We estimated population changes in the UK using the location of Facebook users and show how time-varying populations influence a model of COVID-19.
We present one full year of CoMix contact survey data from participants in England between March 2020 and March 2021 to track social contact behaviour during the Covid-19 pandemic.
We present the analyses of the impact of national and local restrictions on the number of setting-specific contacts that people have prior to and during the restrictions from an ongoing survey (CoMix) which tracks social contact behaviour during the Covid-19 pandemic.
Combining CoMix contact survey data with profiles in infectiousness and susceptibility to estimate the effect on the reproduction number.
We update the synthetic contact matrices with the most recent data, comparing them to measured contact matrices, and develop customised contact matrices for rural and urban settings. We use these to explore the effects of physical distancing interventions for the COVID-19 pandemic in a transmission model.
Simulated isolation, tracing and quarantine control strategies for SARS-CoV-2 in a real-world social network generated from high resolution GPS data.
We analyse social contact data from Kenyan informal settlements to estimate if COVID-19 control measures have affected disease transmission, and economic and food security
Interactive dashboard of Facebook colocation data
We present the first results of an ongoing survey (CoMix) to track social contact behaviour during the Covid-19 pandemic, and compare social mixing to patterns found in a previous survey.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00478.warc.gz
|
CC-MAIN-2022-27
| 1,704 | 11 |
http://www.pctools.com/mrc/infections/id/Trojan.Spy.VB/
|
code
|
Trojan.Spy.VB - Infection Database
These pages list all available infections that Spyware Doctor is capable of removing. Infections are presented by name, level of threat and a brief description and have been organised in alphabetical order by name.
On this page you can:
- Browse through the infections by page or first letter.
- Search for a specific infection using the Search field.
- Click on an infection in the list below to view further details on a specific infection
Displaying infections 1-5 of 5 found.
|Trojan-Spy.VB.AF||Trojan.Spy.VB.AF drops other spying malware, such as SpyAnyTime Keylogger and ...|
|Trojan-Spy.VB.GG||Trojan.Spy.VB.GG steals information gathered from your computer and sends it to ...|
|Trojan-Spy.VB.HJ||Trojan.Spy.VB.HJ logs keystrokes and send them to the attacker via email. ...|
|Trojan-Spy.VB.NB||Trojan.Spy.VB.NB is a trojan which will capture all keystrokes from users and ...|
|Trojan-Spy.VB.QF||Trojan.Spy.VB.QF tries to download malicious files, searches the URL cache to ...|
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320362.97/warc/CC-MAIN-20170624221310-20170625001310-00587.warc.gz
|
CC-MAIN-2017-26
| 1,055 | 13 |
https://archive.org/details/wordstar_2.26_osborne1_1981_micropro
|
code
|
November 3, 2013
Wordstar's influence isn't limited to word processing
Wordstar, like Visicalc, gave people reasons to own computers and drove the "paperless office"...which actually ended up using MORE paper because reprinting is far easier than retyping an entire document.
Wordstar used "markup" codes to change the appearance and output of text - centering, bold, line breaks, etc. Wordstar was not the first (the idea had been around since 1970), it was the first that was widely used.
Wordstar's popularity and use of markup was very likely an influence on the creation of other markup languages, most notably HTML, without which the internet would be limited to Usenet, Telnet and Gopher.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652149.61/warc/CC-MAIN-20230605153700-20230605183700-00337.warc.gz
|
CC-MAIN-2023-23
| 695 | 5 |
https://unicode-org.atlassian.net/browse/CLDR-5991
|
code
|
When we leave beta, we want to revert votes but keep users.
We used to just nuke the vote table. But now, we preserve votes.
Probably, we should keep each release's votes in a separate table, and keep 'beta' votes in a beta table. That way we could revert just by deleting the beta table, or even simply ignoring it.
For future discussion. Probably best post 24.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259126.83/warc/CC-MAIN-20190526105248-20190526131248-00313.warc.gz
|
CC-MAIN-2019-22
| 362 | 4 |
http://www.tomshardware.com/forum/26072-63-mcafee-freezes-computer
|
code
|
Hi, I just got a brand new laptop with McAfee pre-installed... when I boot up, the McAfee screen says "your computer is at risk" and gives me a choice to "check status" or "close". I clicked Check Status, and it says the status is off, turn it on... I pressed "turn on"... then the screen just shows "running" and I can't even close the window... and it freezes up the laptop... so frustrating!!!!
I may be jumping to conclusions a little quickly, but it sounds infected. Was this a display laptop? Some businesses use display laptops to do quick data transfers for customers, and I have heard that these laptops/desktops get infected on a regular basis.
Try getting the serial from the product if you can see it before it locks up, then uninstall the program, jump on CNET (a trusted website), and download the same version that has a lot of download requests; this will determine whether the PC is infected or there is an issue with the software itself.
> Downloading an application with a lot of download requests will reduce the chance of getting a malicious link.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657125654.84/warc/CC-MAIN-20140914011205-00288-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
|
CC-MAIN-2014-41
| 1,040 | 4 |
https://community.flexera.com/t5/InstallShield-Forum/Feature-request-state-is-null-when-running-upgrade-installer/m-p/102733
|
code
|
Feature request state is null when running upgrade installer
I have Feature A and Feature B, with conditions A and B, defined in a Basic MSI project. When I run the installer the first time, condition A is true but B is false, so Feature A is installed.
After several days, condition B became true (it depends on a registry entry). I updated the installer and ran it, but Feature B was not installed. The feature state of B is:
Feature: Feature B; Installed: Absent; Request: Null; Action: Null
In the log, MigrateFeatureStates: based on existing product, setting feature 'Feature B' to 'Absent' state.
To install Feature B, I have to uninstall the product and install it again. How can I install Feature B without uninstalling? Thanks for any help.
Hi @shoogun ,
The actual meaning of the msi log line "Feature: Feature B; Installed: Absent; Request: Null; Action: Null" is:
|Request: Null||No request.|
|Action: Null||No action taken.|
|Installed: Absent||Component or feature is not currently installed.|
Also, by default, a Windows Installer minor upgrade works this way: features that are already installed are revalidated, while everything else is treated as absent.
But you can try the following to check whether it works:
- REINSTALLMODE is "omus" by default, but I guess "emus" is also OK ("e" means reinstall if the file is missing or is an equal or older version).
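For reference, a minor upgrade with this property set is typically applied with a command line along these lines (the package and log file names are illustrative):

    msiexec /i Product.msi REINSTALL=ALL REINSTALLMODE=emus /l*v upgrade.log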
Hope it helps,
Thanks for the detailed explanation. I tried adding REINSTALLMODE to the Property Manager and editing the value to emus, but Feature B is still not installed. I have to uninstall and install again.
Hi @shoogun,
"It is not a major upgrade. The product version is not change. I just rebuild the install project and install again"
If there is no change in product version it doesn't belong to neither minor upgrade nor major.Can you do MSI difference between both the msi files that you got as outcome of rebuilding as well the base msi?
REINSTALLMODE might not work for non-upgrade cases.
Did you try adding files or any other settings before rebuilding the project? InstallShield's MSI diff tool can help you find what was added in the rebuilt MSI.
MigrateFeatureStates indicates an upgrade scenario. Please see the Microsoft documentation on MigrateFeatureStates for more details.
I changed the product version to a new minor version and added the property REINSTALLMODE with the value "emus". Feature B is not installed when I install the new version, because its condition is not met (the condition checks a registry entry). Then I added the registry entry and ran the minor upgrade installer again, but Feature B is still not installed. The state of Feature B is Request: Null. If I uninstall and reinstall the new version, Feature B is installed correctly. Is it possible to get Feature B installed without uninstalling in this scenario? Thanks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506339.10/warc/CC-MAIN-20230922070214-20230922100214-00469.warc.gz
|
CC-MAIN-2023-40
| 2,876 | 26 |
https://pkmncollectors.livejournal.com/1501211.html
|
code
|
Please pay me ASAP - the amounts include shipping from me to you!
My paypal address is: the.linea.alba(at)gmail.com; please put your LJ name/what you are buying in the memo. ^_^
Giratina Blok set - callyfin - $40
Regigigas Blok - ridi - $8
Also, eristell_neko, please pay me $14 for the Shaymin neck strap!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145654.0/warc/CC-MAIN-20200222054424-20200222084424-00489.warc.gz
|
CC-MAIN-2020-10
| 307 | 5 |
https://kolibri-dev.readthedocs.io/en/develop/frontend_architecture/dependencies.html
|
code
|
Dependencies are tracked using yarn; see the yarn documentation for details.
We distinguish development dependencies from runtime dependencies, and these should be installed as such using yarn add --dev [dep] or yarn add [dep], respectively. Your new dependency should now be recorded in package.json, and all of its dependencies should be recorded in yarn.lock.
Individual plugins can also have their own package.json and yarn.lock for their own dependencies. Running yarn install will also install all the dependencies for each activated plugin (inside a node_modules folder within the plugin itself). These dependencies will only be available to that plugin at build time. Dependencies for individual plugins should be added from within the root directory of that particular plugin, as in the sketch below.
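For example, pinning a dependency to a single plugin might look like this (the plugin path and package names are hypothetical):

# Run these from the plugin's own root so the dependency is recorded
# in that plugin's package.json and yarn.lock, not the top-level ones.
cd kolibri/plugins/my_plugin
yarn add lodash        # runtime dependency, available to this plugin at build time
yarn add --dev jest    # development-only dependency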
A command is also available to assist in tracking the source of bloat in our codebase.
In addition, a plugin can have its own webpack config, specified inside the buildConfig.js file, for plugin-specific webpack configuration (loaders, plugins, etc.). These options will be merged with the base options.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363157.32/warc/CC-MAIN-20211205100135-20211205130135-00348.warc.gz
|
CC-MAIN-2021-49
| 1,057 | 10 |
http://www.antionline.com/showthread.php?225386-Mac-questions&p=503694&mode=threaded
|
code
|
April 20th, 2002, 02:52 PM
Okay, so they're only partially security questions, but I figured this might be the best place to post and draw the least amount of Mac bashing.
First off, I'm looking at potentially adding a Power Mac G4 to an existing network, to be used as a multimedia development studio running a wide variety of graphics and video apps. Most likely a dual-processor 1 GHz with 1.5 GB of RAM, SCSI drives, etc. The hardware config isn't the issue, though.
I have very limited experience with Macs, but I'm curious what network protocols are supported aside from AppleTalk and TCP/IP. I can most likely manage the integration via TCP/IP, which will give users access to file shares, but I don't know if I can get full communication via PATHWORKS (OpenVMS on a pair of Compaq Alphas) for logins and permission settings.
Also, with Mac OS X, to configure services, permissions, and settings and lock it down, I assume I will need to edit the .cnf files (or equivalent) on the Mac. Or is there a simpler way? Being Unix-based, I think I have it figured out, but without actually having the system yet, I'm not sure how much OS X is like Unix, or what has been changed.
Also, does anyone know where I can find a side-by-side comparison of the G3 and G4 chipsets? I may need to add a couple of Apple laptops as presentation machines, and I want to keep the bottom line ($$$) fairly low. I've tried Apple's site as well as numerous searches on Google, and I keep coming up blank, aside from tech docs on various machines.
Thanks for any help with the above,
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123530.18/warc/CC-MAIN-20170423031203-00264-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,572 | 7 |
https://maxtrain.com/product/migrate-open-source-data-workloads-to-azure/
|
code
|
Maxtrain.com - [email protected] - 513-322-8888 - 866-595-6863
This course will enable students to understand Azure SQL Database and educate them on what is required to migrate MySQL and PostgreSQL workloads to Azure SQL Database.
This module describes the benefits and architecture of Azure SQL DB.
Lab: Creating Source OSS Databases
After completing this module, students will understand:
Lab: Migrating MySQL DB Workloads to Azure SQL DB
This module describes the benefits and process of migrating PostgreSQL DB workloads to Azure SQL DB.
Lab: Migrating PostgreSQL DB Workloads to Azure SQL DB
Some knowledge of open source relational database management systems such as PostgreSQL and/or MySQL, as well as SQL administration and backup and recovery techniques (e.g., dumping table data), is assumed.
Database developers who plan to migrate their MySQL or Postgres DB workloads to Azure SQL DB.
MySQL/Postgres administrators seeking to understand the features and benefits of Azure SQL DB.
1 Day Course
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494974.98/warc/CC-MAIN-20230127065356-20230127095356-00343.warc.gz
|
CC-MAIN-2023-06
| 995 | 12 |