url | tag | text | file_path | dump | file_size_in_byte | line_count
---|---|---|---|---|---|---
stringlengths 13–4.35k | stringclasses 1 value | stringlengths 109–628k | stringlengths 109–155 | stringclasses 96 values | int64 112–630k | int64 1–3.76k
https://issues.apache.org/jira/browse/ZOOKEEPER-2625
|
code
|
I provision a vagrant vm that installs zookeeper into /home/vagrant/zk and adjusts all owner and read/write rights.
With the vagrant user, I start zookeeper as bin/zkServer.sh start /vagrant/data/zoo.cfg
However, the folder data? (or data^M) gets created with the PID inside, instead of putting it into the data folder, which contains the version-2 folder.
Since I'm using the official start scripts, I'm at a loss.
Also, the data? folder comes with root:root ownership, which is strange, as zkServer.sh is executed from the vagrant user.
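The data^M spelling points at a Windows-style carriage return at the end of the dataDir line in zoo.cfg: the start script then treats the \r as part of the directory name. A minimal sketch of a check, assuming the config path from the report (this snippet is not from the original issue):

# Sketch: a dataDir line saved with CRLF endings would make zkServer.sh
# create a literal "data\r" directory; flag any such line.
with open("/vagrant/data/zoo.cfg", "rb") as cfg:
    for line in cfg:
        if line.endswith(b"\r\n"):
            print("CRLF line ending found:", line.decode(errors="replace").rstrip())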
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099892.46/warc/CC-MAIN-20231128151412-20231128181412-00279.warc.gz
|
CC-MAIN-2023-50
| 538 | 5 |
http://appleshinenyc.blogspot.com/2012/03/business-card-organizers.html
|
code
|
Last month was a busy blur of introductions, handshakes and business card swapping. I've amassed a pile of cards, stuffed in a ziploc bag inside my desk drawer. A couple times a week, I make a point to process the contact info to my computer. Then I got to thinking, what about business card organizers?
If you're a visual person (like me), then you might appreciate keeping all your contacts at hand, flipping through them one by one or scrolling down page by page of attractively imaged cards. Here are a few flashy organizers that keep your contacts in one place:
I like the idea of a mini-binder; sleek, slender and you can tuck it away on the bookshelf. Yet those rolodex options are mighty fine-looking for a desktop. Or maybe you're not a paper person and you'd prefer to keep your contacts online. If so, then check out this cool app that enters info for you with a snap of your camera phone.
Tell me, Appleshiners, how do you organize your business cards?
PS- Look at this nifty card stand to put on your desk.
These creative card ideas are really out there.
And when it comes to handing out your own business cards, my friend, Michael, says, 'a card is like a kiss'. What??
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658844.27/warc/CC-MAIN-20190117062012-20190117084012-00105.warc.gz
|
CC-MAIN-2019-04
| 1,182 | 7 |
http://www.beatcanvas.com/content_view.asp?blogid=1&areaid=1&id=1721
|
code
|
I used to have the Twitter gizmo here on my home page. Not no more... too buggy and unreliable. It was showing up as a big black box more often than it was appearing normally, and in IE, it was prompting my debugger. Yuck.
I normally use Opera as my browser, which I like quite a bit as it is fast and safe, so I don't notice issues in IE. But even in Opera, the Twitter gizmo was just a big black box.
I don't think I'll be tweeting as much. Twitter is great for trying to fit a concise message inside 140 characters, such as:
Capitalism: "I look forward to the reward of my labor."
Liberalism: "I look forward to the reward of your labor, too."
And I find that Twitter is the fastest news source there is. But when you follow several hundred people, then much of it is just mashed together. So it still has value, but I won't frequent it as much.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00244.warc.gz
|
CC-MAIN-2023-50
| 847 | 5 |
https://msdn.microsoft.com/en-us/library/system.xaml.hosting.aspx
|
code
|
The topic you requested is included in another documentation set. For convenience, it's displayed below. Choose Switch to see the topic in its original location.
.NET Framework 4.6 and 4.5
Provides classes related to XAML hosting.
Infrastructure. A build provider for server-side XAML (.xamlx) documents.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737956371.97/warc/CC-MAIN-20151001221916-00196-ip-10-137-6-227.ec2.internal.warc.gz
|
CC-MAIN-2015-40
| 306 | 4 |
https://www.ilr.cornell.edu/news/faculty/ilr-welcoming-two-professors
|
code
|
ILR Welcoming Two Professors
Y. Samuel Wang and Dana Yang will join the Department of Statistics and Data Sciences during the 2021-22 academic year.
“Statistics and Data Sciences has had a very successful faculty search this year with two great new hires,” said Alex Colvin, Ph.D. ’99, the Kenneth F. Kahn ’69 Dean and the Martin F. Scheinman ’75, MS ’76, Professor of Conflict Resolution. “Dana Yang and Sam Wang are excellent scholars who will add to the strength of the department and of ILR. We look forward to welcoming them to our community.”
Y. Samuel Wang, Department of Statistics and Data Sciences
• Ph.D., Statistics, University of Washington, 2018
• B.A., Applied Math, Rice University, 2010
Wang has broad interests across statistics, machine learning and data science, but much of his work is in the subfield of "graphical models." In this area, researchers consider how each variable in a complex system might be dependent or independent of the other variables.
He primarily works in theory and methods; however, the methods he works on can be applied to functional magnetic resonance imaging data to discover how different regions of the brain interact, or to financial data to see how the performance of some stocks affects the performance of other stocks, or to systems biology data to see how certain proteins might regulate other proteins. Wang also enjoys connecting statistics and data science to social science questions, as one of his current projects seeks to measure gender bias in co-authorship team formation.
Dana Yang, Department of Statistics and Data Sciences
• Ph.D., Statistics & Data Science, Yale, 2019
• M.A., Statistics, Yale, 2014
• B.S., Mathematics, Tsinghua University, 2013
Yang works in the broad area of high-dimensional statistics and machine learning. One of her focuses is large-scale network analysis, more specifically, learning hidden network structures from noisy observations. She primarily works on determining the fundamental statistical limit for recovery – a threshold beyond which the data becomes too “noisy” and the hidden structure cannot be recovered reliably. Designing algorithms that attain the statistical limit is important for practitioners working on real network data.
Besides the natural applications in social networks, Yang’s work can also be applied to a wide class of other problems including genome sequencing and particle tracking, given the versatility of network models. She is also interested in the ethics and safety of machine learning. Some of her recent works have involved the design of learning frameworks that protect the learner against eavesdropping attacks, for example, in federated learning.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057524.58/warc/CC-MAIN-20210924110455-20210924140455-00312.warc.gz
|
CC-MAIN-2021-39
| 2,720 | 14 |
http://theoreti.ca/?paged=2
|
code
|
The New York Times has a nice short video on cybersecurity which is increasingly an issue. One of the things they mention is how it was the USA and Israel that may have opened the Pandora’s box of cyberweapons when they used Stuxnet to damage Iran’s nuclear programme. By using a sophisticated worm first we both legitimized the use of cyberwar against other countries which one is not at war with, and we showed what could be done. This, at least, is the argument of a good book on Stuxnet, Countdown to Zero Day.
Now the problem is that the USA, while having good offensive capability, is also one of the most vulnerable countries because of the heavy use of information technology in all walks of life. How can we defend against the weapons we have let loose?
What is particularly worrisome is that cyberweapons are being designed so that they are hard to trace and subtly disruptive in ways that are short of all out war. We are seeing a new form of hot/cold war where countries harass each other electronically without actually declaring war and getting civilian input. After 2016 all democratic countries need to protect against electoral disruption which then puts democracies at a disadvantage over closed societies.
Explainability – Can someone get an explanation as to how and why an AI made a decision that affects them? If people can get an explanation that they can understand then they can presumably take remedial action and hold someone or some organization accountable.
Transparency – Is an automated decision making process fully transparent so that it can be tested, studied and critiqued? Transparency is often seen as a higher bar for an AI to meet than explainability.
Responsibility – This is the old computer ethics question that focuses on who can be held responsible if a computer or AI harms someone. Who or what is held to account?
In all these cases there is a presumption of process both to determine transparency/responsibility and to then punish or correct for problems. Otherwise people will have no real recourse.
One of the issues that interests me the most now is the history of this discussion. We tend to treat the ethics of AI as a new issue, but people have been thinking about how automation would affect people for some time. There have been textbooks for teaching Computer Ethics, like that of Deborah G. Johnson, since the 1980s. As part of research we did on how computers were presented in the news, we found articles in the 1960s about how automation might put people out of work. They weren't thinking of AI then, but the ethical and social effects that concerned people back then were similar. What few people discussed, however, was how automation affected different groups differently. Michele Landsberg wrote a prescient article, "Will Computer Replace the Working Girl?", in 1964 for the women's section of The Globe and Mail that argued that it was women in the typing pools who were being put out of work. Likewise I suspect that some groups will be more affected by AI than others and that we need to prepare for that.
Addressing the issue of how universities might prepare for the disruption of artificial intelligence is a good book, Robot-Proof: Higher Education in the Age of Artificial Intelligence by Joseph Aoun (MIT Press, 2017).
Instead of educating college students for jobs that are about to disappear under the rising tide of technology, twenty-first-century universities should liberate them from outdated career models and give them ownership of their own futures. They should equip them with the literacies and skills they need to thrive in this new economy defined by technology, as well as continue providing them with access to the learning they need to face the challenges of life in a diverse, global environment.
Operation Jane Walk appropriates the hallmarks of an action roleplaying game – Tom Clancy’s The Division (2016), set in a barren New York City after a smallpox pandemic – for an intricately rendered tour that digs into the city’s history through virtual visits to some notable landmarks. Bouncing from Stuyvesant Town to the United Nations Headquarters and down the sewers, a dry-witted tour guide makes plain how NYC was shaped by the Second World War, an evolving economy and the ideological jousting between urban theorists such as Robert Moses and Jane Jacobs. Between stops, the guide segues into musical interludes and poetic musings, but doesn’t let us forget the need to brandish a weapon for self-defence. The result is a highly imaginative film that interrogates the increasingly thin lines between real and digital worlds – but it’s also just a damn good time.
It becomes clear as one reads on that none of the assets of the site were original; they were all clipart or music taken from elsewhere. Nonetheless LaCarte and others were able to make some money on the success of the site.
I personally think the first viral internet meme was the Mrs. Fields (or Neiman Marcus) cookie recipe story that circulated by email. It was an urban legend about being billed $250 for a recipe by a Mrs. Fields store and then sharing that recipe. According to Snopes this legend has quite a history going back to a 1948 cookbook.
Jingwei, a bright digital humanities student working as a research assistant, has been playing with generative AI approaches from aiweirdness.com – Letting neural networks be weird. Janelle Shane has made neural networks funny by using them to generate things like new My Little Ponies. Jingwei scraped titles of digital humanities conferences from various conference sites, trained a model, and generated new titles just waiting to be proposed as papers (a toy sketch of the technique follows the list):
The Catalogue of the Cultural Heritage Parts
Automatic European Pathworks and Indexte Corpus and Mullisian Descriptions
Minimal Intellectual tools and Actorical Normiels: The Case study of the Digital Humanities Classics
Automatic European Periodical Mexico: The Case of the Digital Hour
TEIviv Industics – Representation dans le perfect textbook
Conceptions of the Digital Homer Centre
Preserving Critical Computational App thinking in DH Languages
DH Potential Works: US Work Film Translation Science
Translation Text Mining and GiS 2.0
DH Facilitating the RIATI of the Digital Scholar
Shape Comparing Data Creating and Scholarly Edition
DH Federation of the Digital Humanities: The Network in the Halleni building and Web Study of Digital Humanities in the Hid-Cloudy
The First Web Study of Build: A “Digitie-Game as the Moreliency of the Digital Humanities: The Case study of the Digital Hour: The Scale Text Story Minimalism: the Case of Public Australian Recognition Translation and Puradopase
The Computational Text of Contemporary Corpora
The Social Network of Linguosation in Data Washingtone
Designing formation of Data visualization
The Computational Text of Context: The Case of the World War and Athngr across Theory
The Film Translation Text Center: The Context of the Cultural Hermental Peripherents
The Social InfrastructurePPA: Artificial Data In a Digital Harl to Mexquise (1950-1936)
EMO Artificial Contributions of the Hauth Past Works of Warla Management Infriction
DAARRhK Platform for Data
Automatic Digital Harlocator and Scholar
Complex Networks of Computational Corpus
IMPArative Mining Trail with DH Portal
Pursour Auchese of the Social Flowchart of European Nation
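Shane's generators are neural networks; as a toy illustration of the same train-then-sample idea, a character-level Markov chain does the job. This is a sketch only, and the seed titles below are invented stand-ins for the scraped corpus:

import random
from collections import defaultdict

# Toy character-level Markov model: learn 3-character contexts from seed
# titles, then sample a new title. Seed titles are invented examples.
titles = ["Text Mining the Digital Archive", "Mapping the Digital Humanities"]
ORDER = 3
model = defaultdict(list)
for title in titles:
    padded = "^" * ORDER + title + "$"
    for i in range(len(padded) - ORDER):
        model[padded[i:i + ORDER]].append(padded[i + ORDER])

state, out = "^" * ORDER, []
while True:
    nxt = random.choice(model[state])
    if nxt == "$" or len(out) > 80:   # stop at end-marker or length cap
        break
    out.append(nxt)
    state = state[1:] + nxt
print("".join(out))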
Anatomy of an AI System – The Amazon Echo as an anatomical map of human labor, data and planetary resources. By Kate Crawford and Vladan Joler (2018)
Kate Crawford and Vladan Joler have created a powerful infographic and web site, Anatomy of an AI System. The dark illustration and site are an essay that starts with the Amazon Echo and then sketches out the global anatomy of this apparently simple AI appliance. They do this by looking at where the materials come from, where the labour comes from (and goes), and the underlying infrastructure.
Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data.
The essay/visualization is a powerful example of how we can learn by critically examining the technologies around us.
Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product.
Queer places are, by definition, sites of accretion, where stories, memories, and experiences are gathered. Queer place, in particular, is reliant on ephemeral histories, personal moments and memories. GoQueer intends to integrate these personal archives with places for you to discover.
I recently downloaded and started playing the iOS version of GoQueer from the App Store. It is a locative game from my colleague Dr. Maureen Engel.
Engel reflected on this project in a talk on YouTube titled Go Queer: A Ludic, Locative Media Experiment. Engel nicely theorizes her game not once but in a doubled set of reflections that show how theorizing isn't a step in project design, but continuous thinking-through.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912207146.96/warc/CC-MAIN-20190327000624-20190327022624-00005.warc.gz
|
CC-MAIN-2019-13
| 9,164 | 47 |
https://content.minetest.net/packages/j45/j_mute/
|
code
|
What this mod does
This mod is used to moderate chat. If someone is spamming, you use /mute player-name and they won't be able to type in chat; then, when you think they have learnt a lesson, you use /unmute player-name. Or, if you want to mute someone for a certain amount of time, use /mutesec player-name seconds, and it will automatically unmute them after that time.
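For example (player names are illustrative):

/mute griefer01            (griefer01 can no longer type in chat)
/mutesec spammer42 300     (spammer42 is muted for 300 seconds, then auto-unmuted)
/unmute griefer01          (griefer01 can chat again)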
yayyer: the idea for this mod
HimbeerserverDE: help programming
Fleckenstein: help programming
quote: "trees are green"
To contact me, message me on discord: j45#7171
(now there will be no spam)
Does what it says on the tin
It's a good mute mod, and has also become game agnostic now. Unless you want to manually manage shout privileges, this is a must in any server's modset.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656675.90/warc/CC-MAIN-20230609100535-20230609130535-00465.warc.gz
|
CC-MAIN-2023-23
| 763 | 11 |
http://lists.openmoko.org/pipermail/community/attachments/20070705/bc005fe4/attachment.htm
|
code
|
I really like this sort of lego block approach to mobile devices. Some people want bluetooth, some want gps, some want cameras, some want wifi, extra storage, IR, etc, but not everyone necessarily wants all those things. I think this is a situation where mobile devices could take an example from desktop PCs. I am dreaming of some future devices consisting of cases which can hold various module blocks.
The most basic mobile device would contain three things:
1. Case: this would provide the main housing for all modules, and include user input buttons, displays, etc.
2. Main CPU module
3. Power module (essentially just a battery)
Everything else would be an optional peripheral module, connected over some standard bus (i2c?). The number and types of peripherals supported would mostly depend on your case type.
For example, with this concept you could theoretically swap your GSM module with a CDMA module, update your software and you're good to go on your new network. Another scenario could be that a user only wants or can afford the base model at the moment. Then later they can decide to add that bluetooth or gps module they are missing. Defining a standard battery form factor would be pretty awesome in itself. People who prefer minimal devices could get the smaller, more portable cases which only fit a few modules, while others who want all the whizbang features can get the larger advanced cases.
If the modules are directly physically connected, they don't all need their own batteries/bluetooth/etc, just some common data bus and power interface. For optimum compatibility, you would want to standardize the module block form factor. You could have blocks of various sizes, depending on the complexity of the module. For example, maybe a gps unit can fit in a 1x1x1 block size, but maybe a gsm requires a 2x1x1 block size. Battery might be 4x4x1 or something. As long as the dimensions are in multiples of the same units, there is a good chance of fitting all the modules together in your device.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423222.65/warc/CC-MAIN-20170720141821-20170720161821-00430.warc.gz
|
CC-MAIN-2017-30
| 2,061 | 8 |
https://www.connectionstrings.com/camelot-net-connector-for-sharepoint/info-and-download/
|
code
|
This .NET Framework Class Library is provided by Bendsoft.
The main functionality of the class library is contained in the file Camelot.SharePointConnector.dll.
Add a reference to the assembly Camelot.SharePointConnector and include the Camelot.SharePointConnector.Data namespace. Instantiate a new SharePointConnection connection object. Set the connection string and open the connection.
VB.NET code sample
Imports Camelot.SharePointConnector.Data

Dim myConnection As SharePointConnection = New SharePointConnection()
myConnection.ConnectionString = myConnectionString
myConnection.Open()
' execute queries, etc.
myConnection.Close()
C# code sample
using Camelot.SharePointConnector.Data;

SharePointConnection myConnection = new SharePointConnection();
myConnection.ConnectionString = myConnectionString;
myConnection.Open();
// execute queries, etc.
myConnection.Close();
The Camelot .NET Connector lets you easily develop .NET applications that require secure, high-performance data connectivity with SharePoint using standard SQL language. It implements the required ADO.NET interfaces and integrates into ADO.NET aware tools. Developers can build applications using their choice of .NET languages. Besides standard CRUD operations, the Connector supports features that you will not find in any other tool, such as JOIN and UNION. The Connector can be used by any developer with basic SQL knowledge.
More info about this class library can be found at the Bendsoft product page.
This .NET Framework Class Library, Camelot .NET Connector for Microsoft SharePoint, can be downloaded here.
The Camelot .NET Connector for Microsoft SharePoint class library can be used to connect to the following data sources by using the following connection string references:
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104676086.90/warc/CC-MAIN-20220706182237-20220706212237-00267.warc.gz
|
CC-MAIN-2022-27
| 1,760 | 11 |
https://download.cnet.com/copyright-free-music/3000-2141_4-78549426.html
|
code
|
Copyright Free Music is an app that lets the user listen to free music from the NCS music channel,
which provides it for free.
This app provides an amazing, smooth music player.
Easy to use.
Background music player.
All rights and credits to NoCopyrightSounds.
This app doesn't provide offline music (this is not a music downloader app);
no caching or downloading.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511055.59/warc/CC-MAIN-20231003060619-20231003090619-00501.warc.gz
|
CC-MAIN-2023-40
| 353 | 8 |
http://headgum.com/episode/sexting-w-jake-amir/
|
code
|
She Didn't Text Back Sexting w/ Jake & Amir!
February 29, 2016
When two podcasts that are oddly similar collide. If you don’t know Jake & Amir you should definitely go listen to their podcast “If I Were You”. You might hear us on the latest episode!
AUSTIN LIVE SHOW TICKETS: http://bit.ly/1QZQFQ7
Past Due Music video: http://bit.ly/20D9sG5
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187225.79/warc/CC-MAIN-20170322212947-00131-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 346 | 5 |
https://www.computing.co.uk/ctg/blog-post/1864784/let-s-pretend
|
code
|
This week’s job of the week, and perhaps of the year, was spotted by reader Simon Reed. The candidate must: champion delivery of the systems development project portfolio, set out a clear path for the...
The UK IT Industry Awards 2018 was a huge success. But don't take our word for it, check out this gallery of the nation's top IT professionals letting their hair down!
AI that can truly replace humans is still in the distant future, and automation has a long way to go
25,000 'events' Office 365 recorded and shared among 30 engineering teams at Microsoft
Computing presents all the winners of the UK IT Industry Awards 2018, in glorious technicolour
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742906.49/warc/CC-MAIN-20181115182450-20181115204450-00072.warc.gz
|
CC-MAIN-2018-47
| 660 | 5 |
https://proxyti.com/is-it-practical-to-count-on-a-database-of-measurement-over-50-gb-some-300-million-data-to-be-imported-inside-3-four-hours-on-a-single-server-mysql/
|
code
|
I’ve got hundreds of millions of rows in a text/csv file (genomics database btw – each row is less than 255 characters long…).
Ideally I would like to make them searchable, since right now my best guess is splitting them (a little help from cygwin!) and reading them one by one as a text file ~500mb from notepad++ (yes…i know…) – so this is a very inconvenient and caveman-like approach.
I would like to use MySQL but maybe others; I have a budget of up to $500 for Amazon instances when needed – maybe 32gb ram, some Xeon Gold and a 200gb hard disk on Amazon can do it? No problem to use up to 10 instances, each doing concurrent insert/loading.
I read somebody had achieved 300,000 rows/second using ‘load data infile’ on a local server with ssd and 32gb ram – if I make it to even 50,000 rows/second and can then query it with, say, phpmyadmin in normal time, I’d be happy. Thanks!
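For reference, the ‘load data infile’ route the poster mentions can be driven from Python roughly like this; a sketch assuming a pre-created table named variants, made-up credentials and file path, and a server configured to allow LOCAL INFILE:

import mysql.connector  # assumes the mysql-connector-python package

# Bulk load a CSV with LOAD DATA, the technique cited in the post.
# Table name, path, and credentials are illustrative assumptions.
conn = mysql.connector.connect(
    host="localhost", user="loader", password="secret",
    database="genomics", allow_local_infile=True,
)
cur = conn.cursor()
cur.execute(
    "LOAD DATA LOCAL INFILE '/data/rows.csv' INTO TABLE variants "
    "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'"
)
conn.commit()
conn.close()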
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00554.warc.gz
|
CC-MAIN-2022-21
| 1,033 | 4 |
https://flylib.com/books/en/2.516.1/controlling_access_to_router_mibs.html
|
code
|
You want to limit the access of a group of NMS systems so they can gather only basic system and chassis information from the router.
Use the following commands to define the MIB branches that a community can access:
[edit snmp]
aviva@router1# set view chassis-info-only oid jnxBoxAnatomy include
aviva@router1# set view chassis-info-only oid snmpMIBObjects include
aviva@router1# set view chassis-info-only oid system include
Then associate the MIB view with the community:
[edit snmp]
aviva@router1# set community chassis-access-only view chassis-info-only
By default, an SNMP community can access the whole MIB installed on the router. You can limit the MIB access that a community has by creating partial views of the MIB. This recipe creates a community that can view information only about objects in the Juniper Networks chassis MIB and in the standard MIB-II MIB. Controlling access consists of two steps: create the view itself using the set view commands and then associate the view with the community using the set community command.
If you want a community to be able to read most but not all of the MIB, you can restrict access to just a few MIB branches.
You might want to give access to all MIB branches except the two in which the JUNOS software allows SNMP Set operations, the ping and traceroute MIB branches:
[edit snmp]
aviva@router1# set view ping-traceroute-exclude oid jnxPingMIB exclude
aviva@router1# set view ping-traceroute-exclude oid jnxTraceRouteMIB exclude
aviva@router1# set community public view ping-traceroute-exclude
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510730.6/warc/CC-MAIN-20230930213821-20231001003821-00112.warc.gz
|
CC-MAIN-2023-40
| 1,669 | 12 |
https://ftp.dk.debian.org/ldp/HOWTO/Printing-Usage-HOWTO-7.html
|
code
|
This is a section of references on the Linux printing system. I have tried to keep the references section of this HOWTO as focused as possible. If you feel that I have forgotten a significant reference work, please do not hesitate to contact me.
Before you post your question to a USENET group, consider the following:
If any of the above are true, you may want to think twice before you post your question. And, when you do finally post to a newsgroup, try to include pertinent information. Try not to just say something like, "I'm having trouble with lpr, please help." These types of posts will most definitely be ignored by many. Also try to include the kernel version that you're running, how the error occurred, and, if any, the specific error message that the system returned.
comp.os.linux.* — a plethora of information on Linux
comp.unix.* — discussions relating to the UNIX operating system
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00116.warc.gz
|
CC-MAIN-2023-14
| 893 | 5 |
https://lists.freedesktop.org/archives/pulseaudio-discuss/2008-January/001108.html
|
code
|
[pulseaudio-discuss] Help in setting up PA
rich.geddes at verizon.net
Sun Jan 20 11:49:25 PST 2008
Thanks for the response.
Currently, I'd like to use PA as a replacement for esound... to
basically send audio from different programs and from my audio capture
card to my audio playback card (capture and playback are on the same
card), taking advantage of mixing and syncing features of PA. Sending
sound packets out to the network is interesting and I'd like to try that
Here's the uncommented part of my /etc/pulse/default.pa file:
add-autoload-sink output module-alsa-sink device=hw:0 sink_name=output
add-autoload-source input module-alsa-source device=hw:0 source_name=input
### Load something to the sample cache
load-sample x11-bell /usr/share/sounds/gtk-events/activate.wav
### Load X11 bell module
load-module module-x11-bell sample=x11-bell
### Publish connection data in the X11 root window
Tanu Kaskinen wrote:
> On Sun, Jan 20, 2008 at 03:07:44AM -0500, Richard Geddes wrote:
>> Just installed the PA packages on Ubuntu 7.10, with alsa drivers.
>> Followed (I think) the steps for set up from the PA "Perfect setup" web
>> page... when I try to run audio through PA (aplay -Dpulse music.mp3) no
>> sound goes to the speakers, however, I can see quite a few udp
>> packets being pushed through eth0... How can I get PA to send those
>> audio packets back to my audio card?
> You don't mention what kind of setup you want. RTP stuff
> doesn't get loaded automatically, so I assume you do want to
> broadcast all your audio to the LAN. If that is correct,
> then the fix is probably quite simple. You probably have
> this line in your default.pa, if you followed the FAQ:
> load-module module-null-sink sink_name=rtp
> Instead of a null sink, you want to use an actual alsa sink.
> So do not load the null sink, but replace the 'source'
> argument of module-rtp-send with the name of the alsa sink's
> monitor source.
> If you need further assistance, please explain what kind of
> setup you want, and attach your /etc/pulse/default.pa.
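Pulling the advice together, the suggested default.pa change would presumably look something like this, assuming the alsa sink named "output" from the config above (a sketch, not from the original thread):

### Do not load module-null-sink; point module-rtp-send at the alsa
### sink's monitor source instead:
load-module module-rtp-send source=output.monitor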
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585270.40/warc/CC-MAIN-20211019140046-20211019170046-00344.warc.gz
|
CC-MAIN-2021-43
| 2,166 | 40 |
https://www.friv2k.com/generic-technology.html
|
code
|
Big Data is revolutionizing 21st-century enterprise without anyone realizing what it really means. ADSL/DSL technology uses cable wires to provide broadband connections to many homes, such that your broadband connection works when a phone line wire is inserted in a DSL or ADSL modem. Educational technology is concerned with the systematic application of science and technology in the field of education, and thus may be defined as the application of technology to education in order to further the cause of the latter. This latest line of action cameras from GoPro has incorporated the most advanced technology yet.
Technology can be transferred by physical means, such as posting or hand-carrying a document overseas, or by carrying a laptop or memory device on which the technology is stored. I am going to talk about three topics which I found the most important in the development of our digital world: mobile technology, computer technology and television technology. Educational technology should not be restricted to the use of audio-visual aids and does not represent merely educational hardware such as the sophisticated devices and mechanical instruments used in education. The strength of the definition includes that it is ambiguous where ambiguity is necessary.
So science is good, as by man's discoveries with the elements of the world we have been able to design and make many inventions for the conveniences and comforts of the flesh. Indeed, the non-neutrality of technology is often associated with an emphasis on the non-neutrality of its social usage rather than the non-neutrality of technical constraints on our purposes.
Whether word-of-mouth, pamphlets, telegraph, letters to the editor, telephone, or snail mail, people have always been social, and they have used the technology of the era to accomplish this. None of the people Bornstein profiles meet Toyama's straw-man definition of the social entrepreneur as a for-profit entrepreneur willing to step on the little people to make a buck.
If technology to achieve the end state already exists, then justification of additional technology development would require that such development result in increased benefits, such as reduction of implementation cost or risk, that would compensate for the projected cost of the development. If we want to be even more specific, we might take the Wiktionary definition of the term, which appears to be more modern and easily comprehensible, as opposed to those in classic dictionaries such as the Merriam-Webster's.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654012.67/warc/CC-MAIN-20230607175304-20230607205304-00404.warc.gz
|
CC-MAIN-2023-23
| 2,570 | 5 |
https://autonomiq.io/robust-analytics/
|
code
|
Reporting and Analytics
Visualize and Analyze Any QA process
Dashboards and predictive insights allow you to measure, monitor, and respond across your entire IT portfolio. Predict defects and quality through deep learning.
Centralize data from all development tools and business systems.
Run Test Frequency
Determine test run frequency. Pinpoint performance issues, reduce defects, and increase quality.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141717601.66/warc/CC-MAIN-20201203000447-20201203030447-00197.warc.gz
|
CC-MAIN-2020-50
| 403 | 6 |
https://simplifier.net/v3-codesystems-dstu2/v3-htmllinktype
|
code
|
HtmlLinkType values are drawn from HTML 4.0 and describe the relationship between the current document and the anchor that is the target of the link
This resource matches a canonical claim from this project.
Canonical claims are used to verify ownership of your canonical URLs.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710417.25/warc/CC-MAIN-20221127173917-20221127203917-00582.warc.gz
|
CC-MAIN-2022-49
| 345 | 5 |
https://donmil.com/services/
|
code
|
We have an analytical team which helps in carrying out model validation and stress-testing related tasks.
We are working on various projects related to application development. These are time-bound development efforts for tools required to carry out efficient work in the trading industry.
Some of the applications developed are in text mining, machine learning, and artificial intelligence.
504 B, Solitaire IT Park, Andheri, Mumbai – 400093.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000837-00242.warc.gz
|
CC-MAIN-2019-26
| 473 | 6 |
https://www.my.freelancer.com/projects/php/open-journal-system-small-update/
|
code
|
Small update of OJS Open Journal System
4 freelancers are bidding an average of $25 for this job
Hello! How are you? I am a website developer, I am very familiar with PHP, Python and Java. I have worked with a lot of website. Please contact me if your website is built by PHP or Python. Thanks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00237.warc.gz
|
CC-MAIN-2020-05
| 299 | 3 |
https://flylib.com/books/en/2.749.1.287/1/
|
code
|
A configuration file is required for each Asterisk module you wish to use. These .conf files contain channel definitions, describe internal services, define the locations of other modules, or relate to the dialplan. You do not need to configure all of them to have a functioning system, only the ones required for your configuration. Although Asterisk ships with samples of all of the configuration files, it is possible to start Asterisk without any of them. This will not provide you with a working system, but it clearly demonstrates the modularity of the platform.
If no .conf files are found, Asterisk will make some decisions with respect to modules. For example, the following steps are always taken:
This appendix starts with an in-depth look at the modules.conf configuration file. We'll then briefly examine all the other files that you may need to configure for your Asterisk system.
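For a flavour of what one of these files controls, a minimal modules.conf sketch might look like the following (the excluded module name is chosen purely for illustration):

[modules]
autoload=yes              ; load every module Asterisk finds at startup
noload => chan_alsa.so    ; ...except modules explicitly excluded here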
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00750.warc.gz
|
CC-MAIN-2023-06
| 894 | 3 |
https://graphicdesign.stackexchange.com/tags/cs5/hot
|
code
|
You could use slices.
You could set up artboards for each object. Or just adjust the artboard to fit only the object you want to export and then tick the "clip to artboard" option when saving/exporting.
You could hide everything you don't want to export first:
Shift-click the art you want to export
Choose Object > Hide from the menu
Layer masks are located under the channels tab.
Copy the contents of your layer by selecting it then pressing Ctrl+A to select all followed by Ctrl+C to copy.
Select the layer that you want to mask and create a new mask by clicking the "add layer mask" icon at the bottom of the layers panel.
Go to channels tab (at the top of the layers panel), and select ...
The file must have a Timeline or Frame sequence. (Window > Timeline)
You need to set the animation options in the Save for Web dialog:
In addition, it is possible to have a quick animation and not see it initially due to speed and duration if the Save For Web options are set to "once". You may need to reload a page/image to see the animation.
Also, some ...
I remember struggling with this in CS3. I think the same fix will still apply in CS5.
So here's what to do...
First you create the type as you did in your example.
You should see 3 small stripes outside of your circle. ( 2 with a small square on it, and 1 without the square) These are just indicators for where the text starts and ends
grab the selection ...
There are several ways to curve text in Illustrator, but the easiest is to select your text (It does not need to be outlined) and go to Effect -> Warp -> Arc... in the main menu.
The harder way (Not really that hard) is to draw an oval and use the Text on a Path tool to add text onto the ovals shape. The benefit of this method is that your text curves ...
It was driving me crazy too... you can deselect the Align to Pixel Grid checkbox on the Transform panel (shows up on "show options"), but new objects will always retain the snapping behaviour.
To turn it off permanently, click on the flyout menu at the top right of the Transform panel, then uncheck Align new objects to pixel grid.
There is another easier (imo) way to do this. Create a new layer mask for the layer you wish to apply the mask to. Click on the mask in the layer panel, then go to image > apply image.
This allows you many options, including adding layers from any open document, controlling opacity, blending modes, channels, etc.
In this case, if you already have your ...
No. But, you can make a hotkey for it.
From top menu: Edit > Keyboard Shortcuts.. Ctrl+Alt+Shift+K
Just select Palette menus from the drop down list and then Animations. Once you've given a hotkey press Accept.
( Make sure to listen to photoshop when it warns you if the inserted hotkey combination would override any existing ones. You can try to use ...
To make the text box a different color, select a corner of the text box with the Direct Selection tool (the white arrow). Adjust the color/stroke normally.
To create a margin between the box edge and the text itself, select the box with the regular Selection tool (black arrow), then go to Type > Area Type Options, and adjust the Inset Spacing under the ...
THIS HAS BEEN UPDATED IN VERSION 23.0.1 You have to turn OFF "snap to grid" behavior. The preferences for alignment of objects are in three DIFFERENT places:
In VIEW menu uncheck "Snap to Point" (NOTE: this has moved in latest version to Preferences menu (see #3 below)
In the TRANSFORM PALETTE un-check "Align to Pixel Grid"
Other related options in ...
Choose Direct Select tool (A) which I call the ANCHOR tool for my students.
Press + to add an anchor. (Changes direct select tool to pen + tool).
Click on the stroke half way between the two anchors you want to remove the stroke from.
I've been playing around with Content Aware Scale after being inspired by this question, and have what I think is a great solution, quite different from my first, and an awful lot easier! A one minute solution.
Draw a rough selection around the car, I simply used the Polygonal Lasso
In the Channels palette click the "Save selection as channel" button
Delete the line connecting the 2 anchor points on either end of the "strokeless" side. It will leave an open shape, but shouldn't cause any major issues. The stroke won't be applied to the open edge.
The other approach is to expand the stroke as a separate object and modify it separately from the fill object.
To add in words: since Export Layers to Files is run by a script, all I had to do was find that script, then find the function which saves the layers to files, find which part of the function does the numbering prefix, and comment it out.
So here are the steps -
On a Mac running Lion, go to Applications > Adobe Photoshop CS5 > Presets > Scripts &...
There's a script for that by the awesome John Wundes (no affiliation).
It's called Set ALL the things, explained here, and lets you set width and height for selected objects.
It can set a whole bunch of other values for selected items, too, if you know the names for them (or, if you look up their names in the Illustrator Scripting Guide or in that linked ...
Photoshop's not the first tool I'd use to do something like this (Illustrator would be my choice), but you can achieve those results by stroking a path using a square brush with the right settings.
Create your circular path using the Ellipse tool.
Select the Brush tool and load the Square Brushes brush set. Select one of the square brushes.
Open the brush ...
There are a couple of ways to do color separations for screen preparation in Illustrator. You're trying to do it the more complicated way, so I'll walk you through how I'd do that first.
Let's start with a simple text object that has a fill and a stroke on top of a rectangle:
The Long Way
Step 1: BACK UP YOUR ORIGINAL ARTWORK!
This process will make ...
There's a script for that. (this is probably the script Joonas' comment alludes to - works just fine in CS6).
(to then fit the art board after fitting the text box, use the art board tool and click on the text box)
Courtesy of Kelso Cartography who have loads of great scripts (their scripts to switch point and area text are also highly recommended), you ...
You are going to kick yourself when I tell you how to solve this, I think: click on the actual path, not the area inside the shape, with the Area Type tool.
One nice thing about Illustrator is that you do not need to select the Area Type tool, as Illustrator will automatically change to it when you mouse-over the path.
If you would rather type ...
Effect -> Stylize -> Inner Glow. Change the mode to Multiply. Click on the little square next to the Mode dropdown and select a suitable black from the color picker. Fiddle with the other settings as needed.
Open the flyout menu in the Color panel and click on CMYK. The Color panel stays in whatever mode it started in, or is switched to. This doesn't affect the color mode of the document or the color; it's just a different way of describing the color.
If you used RGB swatches in your document, you'll find that double-clicking on a swatch after switching to CMYK ...
There's a very important distinction between the document color modes many aren't aware of.
When you open an RGB document color profile, all the swatches, symbols, brushes, etc are RGB items.
When you open a CMYK document color profile, all the swatches, symbols, brushes, etc are CMYK items.
When you switch Document Color Modes mid-stream, all those ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250589861.0/warc/CC-MAIN-20200117152059-20200117180059-00502.warc.gz
|
CC-MAIN-2020-05
| 7,503 | 71 |
https://music.stackexchange.com/questions/64779/piano-tuning-tuning-pin-is-far-too-small-for-tuning-lever/64822
|
code
|
I wanted to learn to tune a piano by myself, and bought a relatively cheap size #2 star tuning lever (~$40) for the job. The tuning pins are far too small for my lever.
I am looking for either a place I could find tuning levers of appropriate size or a substitute that will not damage my piano (I have been looking at harp tuning tools as an option).
The tuning pin is so small that it does not catch on the tuning lever (I am able to rotate the tuning lever freely almost without resistance).
With some quick estimation done by a bit of scotch tape and pencil, I estimate that the tuning pins are likely a full millimeter below size 2. I do not think the lever is at fault, as I have some spare #2 pins that fit perfectly with the lever.
If it is of any help, the piano is made by Young Chang (a Korean manufacturer) and has the letters "CM110" on its frame.
Edit: The pins don't appear to be worn down in particular. The piano has been successfully tuned by professionals before.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506045.12/warc/CC-MAIN-20230921210007-20230922000007-00489.warc.gz
|
CC-MAIN-2023-40
| 980 | 6 |
https://engcourses-uofa.ca/books/introduction-to-solid-mechanics/finite-element-analysis/fea-in-one-dimension/interpolation-order/
|
code
|
FEA in One Dimension: Interpolation Order
As shown in the previous chapter, the approximate methods involve assuming a particular form for the unknown variables such that the approximate solution has a finite number of unknown parameters. The approximate solutions, often termed trial functions, that were used in the previous section were typically polynomials which are continuous and differentiable along the whole domain (Figure 1a). Finite element analysis, however, involves using piecewise linear (piecewise affine) or piecewise nonlinear functions for the approximate solution (Figure 1b). Afterwards, a weak formulation of the problem is solved using the virtual work method (which, as shown previously, is equivalent to the Galerkin method).
In traditional finite element analysis, the values of the displacements at specific points (nodes) across the domain are the main unknowns to be calculated using the method. The displacements at intermediate locations between the nodes are interpolated according to a chosen interpolation function. The interpolation functions are classified according to their differentiability. A C^n interpolation function is an interpolation function that can be differentiated n times, while an interpolation function that is discontinuous across nodes is termed C^{-1}. Figure 2 shows three nodes (node 1, node 2 and node 3) and three different interpolation functions for the displacement at intermediate locations between the nodes. Figure 2 illustrates discontinuous (C^{-1}), continuous (C^0), and once differentiable (C^1) interpolation functions.
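To make the continuity classes concrete, here is a minimal sketch of piecewise linear (C^0) interpolation between nodes, with invented node positions and nodal displacements:

import numpy as np

# Piecewise linear (C^0) interpolation: continuous across nodes but not
# differentiable there. Node positions and displacements are illustrative.
nodes = np.array([0.0, 1.0, 2.0])        # positions of nodes 1, 2 and 3
u_nodal = np.array([0.0, 0.5, -0.2])     # displacement unknowns at the nodes
x = np.linspace(0.0, 2.0, 9)             # intermediate locations
u = np.interp(x, nodes, u_nodal)         # interpolated displacements
print(u)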
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100227.61/warc/CC-MAIN-20231130130218-20231130160218-00262.warc.gz
|
CC-MAIN-2023-50
| 1,562 | 3 |
https://www.neowin.net/news/new-windows-phone-7-videos-illustrate-office-productivity-features/
|
code
|
New videos of Microsoft’s upcoming mobile OS have surfaced demonstrating the depth of its productivity features. The two short clips from Mobility Digest show real-world scenarios that users would encounter when using Windows Phone 7.
The first clip demonstrates the native email client browsing through different categories and mass-deleting messages, but then goes on to illustrate its tight integration with Office as the user directly starts editing a PowerPoint presentation and resends it as an updated attachment. Calendar features are demonstrated as well, with the user accepting an invitation to a meeting.
The second clip gives a quick overview of the Office Hub, starting with a demonstration of OneNote and how user would edit a OneNote document and add inline voice recordings. It goes on to show the general interface of the hub and how documents are organized.
Check them out for yourself:
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00330.warc.gz
|
CC-MAIN-2022-40
| 909 | 4 |
https://www.knowyourhumanrights-domesticabusesurvivors.co.uk/my-rights/what-right-do-i-have/the-right-to-private-and-family-life-home-and-correspondence/
|
code
|
This right is protected by Article 8 in the Human Rights Act.
This right also protects well-being, choice, relationships, privacy and communication.
Some examples include:
Yes. But if a public official is deciding to restrict your right, they must go through a test. They must be able to show that the decision is:
You can ask the public official about their decision or action and ask them to tell you how it was lawful, legitimate and proportionate.
If you can think of a way to deal with this situation or decision that is less restrictive to you then you can raise it with the public official as the decision may not be proportionate.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818711.23/warc/CC-MAIN-20240423130552-20240423160552-00408.warc.gz
|
CC-MAIN-2024-18
| 642 | 6 |
https://www.slant.co/versus/4432/23178/~todomvc_vs_quire
|
code
|
When comparing TodoMVC vs Quire, the Slant community recommends Quire for most people. In the question “What is the best cross-platform to-do list app?” Quire is ranked 36th while TodoMVC is ranked 64th.
Pro Highly customisable
Pro Shows best practice coding examples in many frameworks
The website includes a number of sample executions in a wide range of frameworks.
Pro Open source
Open source software creates the opportunity for user customizations.
Pro Cross platform, sync and offline usage
Pro Organizations, projects, task, hierarchical subtask and smart folders
Unlimited tasks/todo structures
Pro Integration with calendar and github
I haven't dabbled with that though
Pro Multiuser handling
Handle multiple users and assignments
Pro It is free
As of now, at least.
Con No cloud sync out of the box
There is no cloud functionality.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823565.27/warc/CC-MAIN-20181211040413-20181211061913-00054.warc.gz
|
CC-MAIN-2018-51
| 887 | 18 |
https://docs.oracle.com/cd/E19225-01/820-5822/bybfa/index.html
|
code
|
A forensic query can search for either User or Role objects. The query can be very complex, allowing the author to select one or more attribute conditions on related data types. User forensic queries can search attributes with the data types of User, Account, ResourceAccount, Role, Entitlement, and WorkItem. Role forensic queries can search attributes with the data types of Role, User, and WorkItem.
Within a single data type, all attribute conditions are logically ANDed, so that all conditions must be met for a match to occur. By default, matches are ANDed across data types, but if you select the Use OR check box, the matches across data types are logically ORed.
The warehouse may contain multiple records for a single User or Role object, and a single query could return multiple matches for the same user or role. To help differentiate these matches, each data type can be constrained with a date range, such that only records from within the specified date range are considered matches. Each related data type may be constrained with a date range, so it is possible to issue a query of the form:
find all Users with Resource Account on ERP1 between May and July 2005 who were attested by Fred Jones between June and August 2005
The date range is from midnight to midnight. For example, the range May 3, 2007 to May 5, 2007 is 48 hours. It would not include any records from May 5, 2007.
The operands (values to be compared to) for each attribute condition must be specified as part of the query definition. The schema restricts some attributes to have a limited set of potential values, while other attributes have no restrictions. For example, most date fields must be entered in YYYY-MM-DD HH:mm:ss format.
Due to the potentially large volume of data in the warehouse, and the complexity of the query, it may take a long time for the query to produce results. If you navigate away from the query page while a forensic query is running, you will not be able to see the results of the query.
In the Administrator interface, click Compliance in the main menu.
The Audit Policies page (Manage Policies tab) opens.
Click the Forensic Query secondary tab.
The Search Data Warehouse page opens.
Select whether to search user or role records from the Type drop-down menu.
Select the Use OR check box to cause Identity Manager to logically OR the results of each data type queried. By default, the system performs a logical AND on the results.
Select a tab that represents a data type that will be in the forensic query.
Click Add Condition. A set of drop-down menus displays.
Select an operand (condition to check for) from the left drop-down menu and the type of comparison to make in the right drop-down. Then enter a string or integer to search for. The list of possible operands is defined in the external schema. Refer to the online help for a description of each operand.
Optionally, select a range of dates to narrow the scope of the query.
Add more conditions as necessary to the currently-selected data type. Repeat this step for all data types that will be part of the forensic query definition.
Pick, from the available attributes, the attributes that you would like to display in the results of the forensic query.
Specify a value in the Limit results to first field. When using conditions from multiple data types, the limit will be applied to the subquery for each type, and the final result is the intersection of all subqueries. As a result, the final result may exclude some records because of the limit on a subquery.
Click Search to run the forensic query immediately or Save Query to reuse the query. See Saving a Forensic Query for information about reusing your forensic queries.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890823.81/warc/CC-MAIN-20180121175418-20180121195418-00713.warc.gz
|
CC-MAIN-2018-05
| 3,703 | 21 |
http://www.airliners.net/forum/viewtopic.php?f=7&t=876933
|
code
|
I am a very amateur photographer for starters. Last year I spent 3 months flying around Europe, equipped with a Canon PowerShot A70. It had some default-brand CF card of 32MB. I made many pics, and zipped through them quite often to indulge. Very often however I got 'corrupt data' or 'memory error' warnings, and many photos were lost or appeared 'split'.
Now I am going to buy my own camera, a PowerShot A75, and need to choose a CF card. I am going away for three weeks soon (nearly 40 hours around airplanes for travel), without a chance of uploading to a computer, so I will need to rely on my CF card not to mess up like that last one did. There are many reviews, but I still can't make up my mind. As I am a member of this forum I thought I'd ask the pros; maybe someone has tips for this amateur photographer. I am looking to buy at least one 256MB card, maybe a second one (smaller). My criteria are not especially speed, but more reliability and stability. I expect to do a lot of browsing through the pics in the meantime; I hope this does not damage the stored photos. Choices in NL are Kingston, Lexar, Dane-Elec, SanDisk (Ultra II) and Transcend.
cheers for any tips
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719453.9/warc/CC-MAIN-20161020183839-00383-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 1,172 | 3 |
https://southerntiertrail.org/behavioral-counseling-near-me-spring-hill/
|
code
|
traditional in-person treatment (Behavioral Counseling Near Me Spring Hill), including cost, therapist choice, and convenience. While there are other online therapy platforms readily available, BetterHelp stands out for its big network of therapists and affordable pricing plans. Ultimately, the choice between online therapy and traditional in-person therapy comes down to personal preference and individual needs.
Therapy can be beneficial for a vast array of mental health conditions. In this article, we’ll check out 10 various conditions that people might have and how treatment can help.
Depression is a common mental health condition that affects millions of individuals worldwide. Therapy can help by providing a safe space to talk about your emotions and feelings. A therapist can help you identify negative thought patterns and habits and work with you to develop coping strategies and positive routines.
Anxiety is another common mental health condition that can be debilitating. Therapy can help by teaching you relaxation techniques, such as deep breathing and mindfulness, and working with you to develop coping strategies to manage anxiety triggers.
PTSD, or post-traumatic stress disorder, is a mental health condition that can develop after experiencing or witnessing a traumatic event. Therapy can help by offering a safe space to process the trauma and develop coping strategies to manage the symptoms of PTSD.
OCD, or obsessive-compulsive disorder, is a mental health condition characterized by intrusive thoughts and compulsive behaviors. Therapy can help by teaching you how to identify and manage these thoughts and behaviors, as well as develop coping strategies to manage the symptoms of OCD.
Bipolar disorder is a mental health condition characterized by extreme mood swings, ranging from depressive episodes to manic episodes. Therapy can help by offering support and guidance in managing these mood swings, developing coping strategies, and improving communication skills.
Eating disorders, such as anorexia and bulimia, are mental health conditions that can have serious physical effects. Therapy can assist by addressing the underlying emotional and mental problems that contribute to the eating disorder, in addition to establishing methods to handle the physical signs.
Substance abuse can be a difficult habit to break, but therapy can be an effective tool in managing addiction. Therapy can help by addressing the underlying emotional and psychological issues that contribute to substance abuse, as well as developing strategies to manage cravings and triggers.
Relationship issues, such as interaction problems and dispute, can have a significant effect on mental health. Treatment can help by providing a safe space to talk about these concerns and develop strategies to improve interaction and fix dispute.
Grief and loss can be a difficult experience to navigate, but therapy can help by providing support and guidance through the grieving process. A therapist can help you identify and manage the emotions associated with grief and loss, as well as develop coping strategies to move forward.
Stress is a common experience for many people, but it can have a negative impact on mental health. Therapy can help by teaching relaxation techniques and developing coping strategies to manage stress, as well as identifying and addressing the underlying emotional and psychological issues that contribute to it.
In conclusion, therapy can be an effective tool in managing a wide range of mental health conditions, from depression and anxiety to substance abuse and relationship problems. If you are struggling with your mental health, consider seeking the support and guidance of a qualified therapist.
Seeing a therapist can have many benefits for a person’s mental health and health and wellbeing. Here are some of the advantages of seeing a therapist from a mental point of view:
Among the main advantages of seeing a therapist is increased self-awareness. A therapist can help you recognize patterns in your emotions, ideas, and behaviors, along with the underlying beliefs and worths that drive them. By becoming more familiar with these patterns, you can acquire a deeper understanding of yourself and your motivations, which can lead to personal growth and development.
Enhanced emotional guideline
Emotional regulation is the capability to manage and control one’s emotions in a healthy and adaptive method. Seeing a therapist can assist individuals discover and practice emotional regulation strategies, such as deep breathing and mindfulness, that can be valuable in managing hard feelings and lowering stress.
Better social relationships
Interpersonal relationships are a vital component of psychological health and health and wellbeing. Seeing a therapist can assist people enhance their interaction skills, assertiveness, and empathy, which can result in healthier and more satisfying relationships with others.
Increased analytical skills
Therapy can also help individuals develop analytical and problem-solving skills. By working with a therapist, individuals can learn to approach problems in a more effective and systematic way, identify potential solutions, and make decisions that are aligned with their goals and values.
Self-confidence refers to a person’s sense of self-respect and worth. Seeing a therapist can assist people recognize and challenge unfavorable self-talk and beliefs that can add to low self-confidence. Through treatment, individuals can find out to develop a more sensible and positive self-image, which can cause increased self-confidence and self-worth.
Boosted coping abilities
Coping abilities are methods and methods that individuals use to handle tension and difficulty. Seeing a therapist can assist people develop and practice coping skills that are customized to their particular requirements and preferences. Coping skills can include mindfulness, relaxation techniques, problem-solving, and social support, to name a few.
Decreased symptoms of mental illness
Therapy can likewise be effective in reducing symptoms of mental illness, such as depression, anxiety, and post-traumatic stress disorder (PTSD). Therapists use evidence-based treatments, such as cognitive-behavioral therapy (CBT), dialectical behavior therapy (DBT), and eye movement desensitization and reprocessing (EMDR), to help individuals manage symptoms and improve their overall quality of life.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510481.79/warc/CC-MAIN-20230929022639-20230929052639-00235.warc.gz
|
CC-MAIN-2023-40
| 6,525 | 26 |
https://www.writingforums.org/threads/top-referrers.15104/
|
code
|
I was under the impression that this statistic, available through the User Control Panel / Users, showed how many referrals you had made, i.e. new members quoting you as the reason they joined, since we have invites to send direct from the site encouraging our associates to register. However, I suspect I was mistaken, as I appear on this list of top referrers yet have never referred anyone to the site as above (it's because I have few friends rather than being a slight against the site, sniff). Am I therefore right to think that this statistic is derived from how many times members approve (or disapprove) of us through the reputation balance icon? Yours inquisitively.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805541.30/warc/CC-MAIN-20171119095916-20171119115916-00456.warc.gz
|
CC-MAIN-2017-47
| 675 | 1 |
https://coffeecode.net/for-the-paranoid-deleting-flash-local-storage-objects.html
|
code
|
I'm reasonably careful about the cookies I accept from Web sites - I don't want companies to be able to track every site I visit, for example, so that they can build a nice little profile about me. It's for the protection of the companies more than anything else: someone there might die of extreme boredom following the trail of "Evergreen", "Linux Weekly News", "Python docs"...
However, I recently learned about Flash "local storage objects" (LSO), which are similar to browser cookies but capable of storing much richer information and also completely inscrutable in terms of the effectiveness of Adobe's security model. Is Flash really capable of preventing a Flash application running on microsoft.com from accessing an LSO from mail.google.com? I certainly don't know, and as Flash is a closed-source application it's hard for anyone except for the developers at Adobe to know--but I bet there are people extremely motivated to find out. (Insert obligatory "See? Closed source sucks!" comment here.)
So, in my crude attempt to prevent too much garbage accumulating due to the occasional YouTube video or NBC Saturday Night Live skit that I might watch, I've added the following rules to my cron entries to delete my entire set of LSOs every four hours:
5 */4 * * * rm -fr /home/dan/.macromedia/Flash_Player/#SharedObjects
5 */4 * * * rm -fr /home/dan/.macromedia/Flash_Player/macromedia.com/support/flashplayer/sys/
You Windows users can probably do something similar, but I haven't bothered to track that down yet. Sorry.
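For what it's worth, a rough sketch of the Windows equivalent (untested here; the profile path is the usual Flash Player location under %APPDATA%, and the four-hour schedule mirrors the cron entries above):

    rem Remove Flash local storage objects; schedule via Task Scheduler, e.g.:
    rem   schtasks /create /tn "FlushFlashLSO" /sc hourly /mo 4 /tr "cmd /c rmdir /s /q \"%APPDATA%\Macromedia\Flash Player\#SharedObjects\""
    rmdir /s /q "%APPDATA%\Macromedia\Flash Player\#SharedObjects"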
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817442.65/warc/CC-MAIN-20240419172411-20240419202411-00085.warc.gz
|
CC-MAIN-2024-18
| 1,545 | 5 |
http://www.keil.com/support/docs/3860.htm
|
code
|
ARM: Resource Requirements of Software Components
Information in this knowledgebase article applies to:
As a system architect, I need to know the resource requirements of a software component so that I can configure the overall system. An exact number in bytes is not my expectation, but I do need an approximation of the component size. The resource requirements of a software component are composed of:
It is important to understand the requirements of a software component and in this context I have several questions:
The MDK middleware documentation explains how to configure the various resources and each middleware component contains a Resource Requirements section that describes requirements for various configurations.
As for the tool support, the following functionality is available today:
Last Reviewed: Tuesday, April 4, 2017
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525004.24/warc/CC-MAIN-20190717001433-20190717023433-00015.warc.gz
|
CC-MAIN-2019-30
| 854 | 8 |
https://www.codium.ai/glossary/ci-cd/
|
code
|
What is CI/CD?
Continuous Integration and Continuous Deployment, abbreviated as CI/CD, serves as a pivotal practice within modern software development.
In more depth, Continuous Integration (CI) is the method by which developers regularly amalgamate their code alterations into an integral repository and subsequently initiate automated builds and tests using various continuous integration tools. The core objective of CI is to swiftly pinpoint and rectify bugs, enhance software quality, and ultimately diminish the duration necessary for validating/releasing fresh updates in the process.
Continuous Deployment (CD) is an extension of the CI concept; it automates the deployment of all code changes to production after they pass through the build stage. In effect, any change that passes the automated tests is deployed automatically, promoting a swift and efficient development cycle.
Another form of CD automates the release process by enabling swift and sustainable deployment for new changes. However, a crucial distinction is that it does not automatically deploy every change to production. This approach requires a manual trigger for deployment – thus offering heightened control over the timing and method of releasing new features.
- CI/CD is a mature, disciplined approach that enables faster, higher-quality iterations.
To summarize, the CI/CD methodology, through its robust CI/CD pipeline and effective use of CI/CD tools, empowers software development processes for increased efficiency and elevates product robustness and reliability through more frequent code changes.
How CI/CD works
- Code Commit: The concept of Continuous Integration (CI) is defined as a practice where developers integrate code into a shared repository frequently (each integration triggers an automated build and test process).
Developers actively commit code changes to a shared repository, often multiple times within a day. Version control systems-such as Git-typically manage these repositories.
- Automated Testing: Each code commit triggers an automated build process. This process includes a crucial step of running tests to identify errors and integration issues early.
- Immediate feedback: Developers actively receive immediate feedback from these tests. Feedback ensures not only code quality but also the constant deployable state of the main branch in the repository. This continual testing, an integral part of their workflow, facilitates swift corrections upon issue detection. An approach that enhances efficiency and effectiveness within development operations.
- Continuous Deployment: You automatically deploy every change that passes automated tests of the CI phase to the production environment. This approach ensures a rapid and consistent flow of improvements into production – an ideal strategy.
- Continuous Delivery: In the second step, you should automate the software release process up to production. Additionally, prepare deployment-ready changes for release, yet manual intervention remains necessary to deploy them into production. This approach provides a superior degree of control over the timing and manner in which you introduce alterations.
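To make the stages above concrete, here is a purely illustrative pipeline definition in the style of GitHub Actions; the job names, make targets, and deploy script are assumptions, not a prescription:

    name: ci-cd
    on: [push]
    jobs:
      build-test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: make build        # CI: every push is built automatically
          - run: make test         # ...and tested, giving immediate feedback
      deploy:
        needs: build-test
        if: github.ref == 'refs/heads/main'   # gate releases on the main branch
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./scripts/deploy.sh production   # CD: hypothetical deploy step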
Infrastructure as Code (IaC)
This concept revolutionizes the management and deployment of infrastructure through automated processes. IaC encapsulates all aspects of traditional system administration in a programmable format: operating systems, virtual machines, networks – even complex environments can be defined with code.
Automating and treating the setup and configuration of infrastructure like code is a common practice in CI/CD. Through such methods, successful testing and deployment become achievable.
The process of Monitoring and Feedback
This involves constant observation, analysis, and provision of constructive feedback. It's an integral part of enhancing performance, both individually and within organizational contexts.
After deployment, you utilize continuous monitoring tools to guarantee optimal performance of the application in its production environment. These instruments provide feedback, enabling swift identification and resolution of any potential issues-thereby accomplishing a seamless CI/CD loop.
The primary objective of CI/CD is to enhance the speed, efficiency, and reliability of software development and deployment. This process automation minimizes manual errors; it amplifies team productivity, speeding up the release of features and fixes in software.
Importance of CI/CD
In the realm of software development, one cannot overstate the importance of CI/CD; this methodology-owing to its numerous benefits-plays a crucial role in modern practices:
- Improved Code Quality: Continuous Integration (CI) ensures frequent testing of the code, thereby automating bug detection and facilitating early resolution. This process ultimately elevates the final product’s quality by regularizing integration and testing phases-an approach that substantially decreases the chances of encountering critical issues during the release phase.
- Faster Releases: Through its ability to automate integration and deployment processes, CI/CD accelerates the release cycle of features and bug fixes. Teams can push updates more frequently and with increased confidence, a strategy that sharpens their responsiveness to market fluctuations as well as user feedback.
- Automation in CI/CD increases development efficiency by eliminating manual tasks; this empowers developers to concentrate on writing and enhancing code. The resultant streamlined process shortens the development lifecycle, thereby enabling teams to expedite feature delivery: a productivity boost with decreased time investment.
- Enhanced Developer Collaboration: CI fosters a culture where developers proactively merge their changes regularly. This consistent merging strategy not only reduces conflicts but also cultivates an environment of continuous development; here, code is shared, reviewed, and integrated without interruption.
- With CI/CD, we make smaller and more frequent updates that significantly reduce deployment risks. CD facilitates immediate addressing of any issues by providing real-time feedback on these changes.
- CI/CD fosters the consistency and reliability of software build, testing, and deployment processes. This guarantee allows for reliable release of the software at any given time with minimal human intervention – thus reducing potential errors.
- By automating repetitive tasks and enhancing the development process’s efficiency, we can achieve better resource allocation – the strategy enables us to concentrate on project areas that contribute superior value.
- CD enables a quick feedback loop with end-users. Through the regular deployment of modifications and the ensuing immediate feedback reception, teams gain an enhanced understanding of user necessities – a crucial insight that guides them in tailoring the product accordingly.
- CI/CD processes enhance scalability and flexibility; as the project expands, developing efforts become more manageable. Additionally, integrating with a variety of tools and technologies—adapting to diverse project requirements-is facilitated by these methods.
CI/CD represents more than a mere collection of practices. They embody an entire cultural transformation within software development. This shift cultivates an approach that is not only agile and efficient but also laser-focused on quality. This becomes essential in a rapidly changing technological terrain characterized by its relentless pace.
CI/CD and DevOps
The DevOps philosophy relies on CI/CD as its backbone. CI/CD for DevOps is an integral part that facilitates many of the core principles. As a cultural and professional movement, DevOps underscores collaboration and communication-particularly between software developers and IT operations teams-with integration at its heart. The objective? To forge a more agile, efficient-and ultimately responsive-IT service delivery model.
The software development process streamlines through CI/CD, which directly aligns with DevOps goals: it accelerates time to market and enhances product quality and operational efficiency. Supporting DevOps, Continuous Integration ensures that code changes are integrated and tested rapidly and frequently – thus facilitating a more collaborative, transparent development procedure. This constant alignment of the merging/testing process with the principle of continuous improvement in DevOps perfectly reflects their emphasis on feedback.
CD takes this a step further by automating the deployment process, ensuring that new features and fixes are rapidly and reliably delivered to users. This method exemplifies the DevOps ethos-a continual flow between development teams and operations with minimal manual intervention that breaks down silos among different groups within an organization. By facilitating swifter, more regular releases, CI/CD not only speeds up the development cycle but also permits faster feedback from both end-users and operational units. Providing an invaluable resource for iterative development is one of DevOps’ core principles.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474482.98/warc/CC-MAIN-20240224012912-20240224042912-00263.warc.gz
|
CC-MAIN-2024-10
| 9,217 | 37 |
http://preciseinfo.org/Convert/Articles_Java/Lock_Experts/Java-Lock-Experts-120120094101.html
|
code
|
Re: Blocks for scope control
On 20.01.2012 02:54, Arved Sandstrom wrote:
Another decent reason, in my opinion, is if the management of them is
not disciplined. An organization that doesn't even wonder whether they
are disciplined enough to manage assertions in code isn't, and others
that do ask the question may decide that they are not.
I think this does not necessarily need to be handled on organization
level. Why make it so big? Every developer can help himself and their
colleagues by using them in a reasonable way.
What I mean by this is, assertions are easy to put in. They are not
always correct when put in, and they have to be maintained in any case
as code changes (or at least removed if necessary). Existing obsolete
assertions need to be taken into account when adding new ones in the
same class or package, because if you enable one then you may enable
more. Or you remove old ones you don't understand, if you trust yourself
to understand the business rules from 4 years ago well enough to
classify the assertions as being defunct.
Assertions also have the effect that they force you to think about
certain - possibly not obvious - properties of the code / class at hand
when you change the code. So while an assertion may look tricky it
actually helps you when modifying code to not forget important aspects.
This may be more tedious but it certainly helps code robustness in the
Let me put it this way: I would feel good about looking at code that had
assertions in it if I saw that they were also commented where necessary,
including traceability notes where that makes sense, *and* were
supported by unit tests that exercised the assertions.
I view assertions in part also as documentation. Often no additional
commenting is needed. I frequently have a private boolean method
"classInvariant" (or another, hopefully telling, name) which performs
the checks and has documentation of its own. In these cases you get
the name of the method plus the documentation on the method.
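A minimal sketch of that pattern (the class and its invariant are made
up for illustration, not taken from anyone's code):

// Illustrative only: internal consistency expressed as a named,
// documented invariant method called from assert statements.
public class Account {
    private long balance;      // in cents
    private long creditLimit;  // in cents, >= 0

    public void withdraw(long amount) {
        assert amount > 0 : "withdrawal must be positive";
        balance -= amount;
        assert classInvariant() : "invariant violated, balance=" + balance;
    }

    /** The balance may never drop below the negated credit limit. */
    private boolean classInvariant() {
        return creditLimit >= 0 && balance >= -creditLimit;
    }
}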
remember.guy do |as, often| as.you_can - without end
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358180.42/warc/CC-MAIN-20211127103444-20211127133444-00453.warc.gz
|
CC-MAIN-2021-49
| 2,034 | 32 |
https://infostellar.freshdesk.com/support/solutions/articles/48000564032-creating-unavailability-windows
|
code
|
What are Unavailability Windows?
Ground station owners may preserve the exclusive right to use their own ground station by creating an "Unavailability Window" in StellarStation. A satellite operator cannot make a reservation for a pass when the period of the pass is covered by an Unavailability Window. In other words, only passes not covered by an Unavailability Window will be offered to satellite operators for reservation.
Ground station owners are requested to set unavailability windows before satellite operators book their passes. You can set an unavailability window starting from 10 minutes in the future onward. On the other side, satellite operators with the on-demand service can book passes up to 7 days ahead, and satellite operators with reservation services can book up to three weeks ahead. You are able to cancel a booked pass if necessary, but the cancellation of a booked pass by the ground station owner incurs a penalty fee.
Access to Ground Station Console
Ground station owners can manage their own unavailability windows through the ground station console, the CLI (Command Line Interface), or the API (Application Programming Interface).
The simplest way to manage unavailability windows is through the ground station console. If you have not signed up for StellarStation, or if you have not been allocated to your organization, please ask [email protected] to have your account allocated under the designated organization group.
If you are successfully allocated to the group, you will see the organization name in your account information.
Then, you can access the ground station console by clicking one of your ground station names in the left sidebar.
You can see the reserved pass schedule and the unavailability window schedule in the calendar view. A black square in the calendar view shows a pass reserved by a satellite operator, and a blue square shows an unavailability window.
Note that time zone can be changed between UTC and your local time by clicking the clock icon.
Creating a new unavailability window through the web console
You can simply set a new unavailability window by clicking the "New Window" button at the top right. A recurring window can also be set by selecting "Daily" or "Weekly" from the dropdown list.
Creating a new unavailability window through the command line interface (CLI)
You can also set and view the inserted unavailability windows through the CLI. For more detail, please refer to the help text in the CLI by executing "stellar gs -h".
>stellar gs -h
Commands for working with ground stations.

Usage:
  stellar ground-station [command]

Aliases:
  ground-station, gs

Available Commands:
  add-uw      Adds unavailability windows on a ground station.
  delete-uw   Deletes unavailability windows on a ground station.
  list-plans  Lists plans on a ground station.
  list-uw     Lists unavailability windows on a ground station.

Flags:
  -h, --help   help for ground-station
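The exact flags each subcommand expects are not reproduced in this article, so the safest way to discover them is to ask the CLI itself, for example:

>stellar gs add-uw -h
>stellar gs list-uw -h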
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102612.80/warc/CC-MAIN-20231210155147-20231210185147-00300.warc.gz
|
CC-MAIN-2023-50
| 2,901 | 15 |
https://origin.geeksforgeeks.org/how-to-fix-could-not-find-folder-tools-inside-sdk-in-android/
|
code
|
How to fix Could not find folder ‘tools’ inside SDK in Android?
There are many IDEs available for Android development, and many developers prefer using Eclipse as an IDE for building Android applications. While using the Eclipse IDE for Android development, we often see an error message saying "Could not find folder 'tools' inside SDK". In this article, we will take a look at 4 different ways to resolve this issue.
This issue might occur if the Android SDK tools have not been installed properly on your system or some files of the Android SDK tools have been corrupted. In that case, we can download them again. Try downloading the Android SDK tools from the Eclipse IDE itself. For that, we have to first install new software in Eclipse IDE. To install new software, navigate to Help in the top bar of Eclipse IDE, then click on Install new software, and a dialog box will appear. Inside that box, simply type Developer tools; you will get to see it on the below screen. Select the check box and click on Next. It will install some dependencies. Make sure you are connected to the internet.
After the files have downloaded, Eclipse will restart automatically. Once Eclipse has restarted, we have to configure the ADT plugin. For configuration, navigate to the Window option in the top bar, then click on Preferences. After that, you will get to see the below window.
If the Android SDK path is not specified, it will prompt you to download the SDK. Make sure to download it. After that, simply click on Apply and Close. Now navigate to the SDK folder to see the tools folder within the SDK.
If the Eclipse IDE is installed properly, without any network issues in between, and the SDK folder has also been downloaded, we simply have to navigate to Eclipse IDE and click on Window > Preferences (the Window option is in the top bar of Eclipse IDE). This will open the below screen.
Inside this screen, we simply have to copy the SDK Location that is already specified, then navigate to that location in the file explorer. There, we have to rename the platform-tools folder to tools, and then restart the Eclipse IDE to solve the issue.
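The same rename can be done from a command prompt; the SDK path below is an assumption, so substitute the SDK Location you copied from Preferences:

    ren "C:\android-sdk\platform-tools" tools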
Many times, while downloading the Android SDK over a corporate network, some files are not downloaded due to proxy interference. In that case, we will not be able to see the tools folder within our SDK folder, so try downloading the SDK folder over a private network to get all the necessary files required for the IDE.
Try clearing the cache for Eclipse IDE. For that navigate to Eclipse IDE>Window>Preferences. In the left pane simply search for Remote resources. The option is displayed in the below screenshot. After that simply click on Refresh remote cache and then click on Apply and close and restart your IDE once again to solve the issue.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652207.81/warc/CC-MAIN-20230606013819-20230606043819-00344.warc.gz
|
CC-MAIN-2023-23
| 2,855 | 10 |
https://github.com/o19s/elasticsearch-learning-to-rank
|
code
|
The Elasticsearch Learning to Rank plugin uses machine learning to improve search relevance ranking. It's powering search at places like Wikimedia Foundation and Snagajob!
- Allows you to store features (Elasticsearch query templates) in Elasticsearch
- Logs features scores (relevance scores) to create a training set for offline model development
- Stores linear, xgboost, or ranklib ranking models in Elasticsearch that use features you've stored
- Ranks search results using a stored model
We recommend taking time to read the docs. There's quite a bit of detailed information about learning to rank basics and how this plugin can ease learning to rank development.
You can also participate in regular trainings on Elasticsearch Learning to Rank, which support the free work done on this plugin.
The demo lives in another repo now, Hello LTR, and it has both ES and Solr examples. Follow the directions for Elasticsearch in the README to set up the environment and start with notebooks/elasticsearch/tmdb/hello-ltr.ipynb. Have fun!
See the full list of prebuilt versions and select the version that matches your Elasticsearch version. If you don't see a version available, see the link below for building or file a request via issues.
To install, you'd run a command like this but replacing with the appropriate prebuilt version zip:
./bin/elasticsearch-plugin install https://github.com/o19s/elasticsearch-learning-to-rank/releases/download/v1.5.4-es7.11.2/ltr-plugin-v1.5.4-es7.11.2.zip
(It's expected you'll confirm some security exceptions; you can pass -b (batch mode) to elasticsearch-plugin to automatically install.)
If you already are running Elasticsearch, don't forget to restart!
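Once Elasticsearch is back up, a quick smoke test (assuming a default cluster on localhost) is to initialize the plugin's feature store:

curl -XPUT 'http://localhost:9200/_ltr'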
As any other piece of software, this plugin is not exempt from issues. Please read the known issues to learn about the current issues that we are aware of. This file might include workarounds to mitigate them when possible.
Note: if you want to dig into the code or build for a version there's no build for, please feel free to run the build and installation process yourself:
./gradlew clean check ./bin/elasticsearch-plugin install file:///path/to/elasticsearch-learning-to-rank/build/distributions/ltr-<LTR-VER>-es<ES-VER>.zip
For more information on helping us out (we need your help!), developing with the plugin, creating docs, etc please read CONTRIBUTING.md.
We do our best to officially support *.*.1 releases of Elasticsearch. If you have a need for "dot-oh" compatibility or a version we don't support, please consider submitting a PR.
- Initially developed at OpenSource Connections.
- Significant contributions by Wikimedia Foundation, Snagajob Engineering, Bonsai, and Yelp Engineering
- Thanks to Jettro Coenradie for porting to ES 6.1
- Bloomberg's Learning to Rank work for Solr
- Our Berlin Buzzwords Talk, We built an Elasticsearch Learning to Rank plugin. Then came the hard part
- Blog article on How is Search Different from Other Machine Learning Problems
- Also check out our other relevance/search thingies: book Relevant Search, projects Elyzer, Splainer, and Quepid
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510387.77/warc/CC-MAIN-20230928095004-20230928125004-00513.warc.gz
|
CC-MAIN-2023-40
| 3,069 | 27 |
https://github.com/fish-shell/fish-shell/issues/1873
|
code
|
Fish's handling of ^ still annoying to git users #1873
There was a change made in the past so that a caret '^' will only redirect stderr if it is the first character of a token:
Today, I was trying to perform a multiple-point branch compare as described at
The '^' was the first syntax I used. I didn't understand why it wasn't working.
Fish was redirecting the stderr to a file with the name of my branch and so git was showing me just the log of the first branch instead of showing the difference.
Not sure if anything can/should be done about it, but thought I would report it anyway.
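To illustrate with hypothetical branch names: the second token below starts with a caret, so fish parses it as a stderr redirection instead of passing it to git:

git log master ^feature       # creates/truncates a file named 'feature'
git log master '^feature'     # quoting the token passes it through to git
git log master --not feature  # or use git's equivalent spelling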
From the recently referenced duplicate issue which is a slightly different input than previously discussed here:
If the hash value contains non-numeric values, the command works fine:
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735812.88/warc/CC-MAIN-20200803140840-20200803170840-00263.warc.gz
|
CC-MAIN-2020-34
| 929 | 10 |
http://depugesijugofo.bsaconcordia.com/java-open-file-for-writing-overwrite-a-file-4912949129.html
|
code
|
Renaming or moving a file may fail for various reasons, like the file being open or wrong file permissions. There is a special constructor for that which accepts a boolean argument to open the file in append mode.
This program shows an example using both Java SE 6 code and Java SE 7 code by using new feature try-with-resource statement to automatically close the resource. Here is how that is done: Use the "solved" flair instead. WriteAsync text End Using End Sub Example The following example shows how to write text to a new file and append new lines of text to the same file using the File class.
If you successfully created the file from Example 1, running this program will get you the integer you entered. File — to be used with. When you are done with a file, you should close it by calling close.
You might want to create it now, right? When you create an input stream, if the file does not exist, then FileNotFoundException is thrown. Finally, we close the file. The File class contains the method mkdir and mkdirs for that purpose.
Different Whence in fseek. This means you can write new content at the end of the file. Do not post referral links to Amazon or other sites. They are just the file versions of printf and scanf. Because of security considerations, opening a file from a Windows 8.
We are a subreddit about learning programming, not about recommending hardware. This file contains two names at the beginning and after we run our program next set of names will be appended to it e. No Referral Links, no links through other sites and clicktrackers: The File class also has a few other constructors you can use to instantiate File instances in different ways.
The renameTo method returns boolean true or falseindicating whether the renaming was successful. When you open the file, you can see the integer you entered. File Before you can do anything with the file system or File class, you must obtain a File instance.
Baski Printer Friendly Format Java provides a number of classes and methods that allow you to read and write files. As I said previously, if you are starting fresh in Java then I suggest you to better follow a book because they provide comprehensive coverage, which means you can learn a lot of things in quick time.
If you are using Java NIO you will have to use the java. In this program, we have a file called names.
Since we are doing append two times, first by using Java 6 code and second by using Java SE 7 code, you will see a couple of names appended to file. FileIO - to be used with Windows 8. The first parameter takes the address of num and the second parameter takes the size of the structure threeNum.
To check if the file exists, call the exists method. Path - to be used on strings that contain file or directory path information. When an output file is opened, any preexisting file by the same name is destroyed.
If you see any posts or comments violating these rules, please report them. If the file does not exist, fopen returns NULL. The mkdirs method will return true if all the directories were created, and false if not.
Do not give out complete solutions. Abusive, racist, or derogatory comments towards individuals or groups are not permitted. Java Program to append text to existing File Here is our complete Java example to demonstrate how to append text to a file in Java.
Guide the OP to the solution, but do not solve it for them.The File class in the Java IO API gives you access to the underlying file system. Using the File class you can: Check if a file or directory exists. Create a directory if it does not exist.
Read the length of a file. Rename or move a file. Java create new file examples. Creating a new file in Java is a very easy task and most of us are aware of this. Let's look at 3 most commonly used ways.
Java create new file examples. Creating a new file in Java is a very easy task and most of us are aware of this. Let's look at 3 most commonly used ways.
HowToDoInJava. Is it possible to overwrite excel file without the overwrite prompt? without overwriting the file, need to create next copy for the same file name using ProcessStartInfo saving a file without overwriting the existing image.
[Java] Write to existing file without overwriting said text file (bsaconcordia.comrogramming) submitted 1 year ago by N1GHTMVR3 so this is a part of. Write the workbook to an OutputStream.
This will overwrite the existing file with updated data. Suppose that we have an Excel file (bsaconcordia.com) looks like this: Now, we are going to write Java code to update this Excel file by this manner: append 4. use a FileWriter instead.
use a FileWriter instead: FileWriter(File file, boolean append). The second argument in the constructor tells the FileWriter to append any given input to the file instead of overwriting it.
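A minimal, self-contained sketch of append mode (the file name is arbitrary):

import java.io.FileWriter;
import java.io.IOException;

public class AppendDemo {
    public static void main(String[] args) throws IOException {
        // true = append mode: new text is written after the existing
        // content instead of overwriting the file.
        try (FileWriter writer = new FileWriter("names.txt", true)) {
            writer.write("another name" + System.lineSeparator());
        }
    }
}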
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00343.warc.gz
|
CC-MAIN-2018-51
| 4,809 | 20 |
https://www.pasc-ch.org/projects/2017-2020/virtual-physiological-blood/
|
code
|
Virtual Physiological Blood: an HPC framework for blood flow simulations in vasculature and in medical devices
PI: Petros Koumoutsakos (ETH Zurich)
Co-PIs: Bastien Chopard, Mauro Pezzè
July 1, 2017 - June 30, 2020
Blood flow is involved in most of the fundamental functions of living organisms in health and disease. It is essential for the transport of oxygen and nutrients, as well as of infectious parasites and metastasizing tumor cells, to tissues and organs. Blood flow has been studied for thousands of years. Observations and experiments have evolved from qualitative descriptions to precise measurements of blood flow rates in vivo. Despite remarkable advances, experiments have limitations on the type and detail of information they can provide for blood flow. The quantification of blood rheology, in particular in complex vascular geometries and in disease, is an open challenge. Even more, the prediction of important quantities such as shear stresses, margination and drug transport associated with blood flow in capillaries and medical devices are still today obtained by decades old empirical formulas with non-quantified uncertainties.
In the past twenty years simulations have advanced to complement experiments and have become an essential tool for investigations of blood flow in animal research and patient care. Simulations have provided insight and detailed quantitative information on the functioning of blood in arteries and capillaries and their effects on the surrounding blood vessels and tissues. Success stories include the elucidation of the inception of aneurysms and the devising of mechanisms for their repair. More recently simulations have been used to design microfluidic devices that aim to diagnose the transport of circulating tumor cells, a potent marker for cancer metastasis.
Despite such advances, we believe that there is significant room for improvement in terms of fidelity and clinically relevant scales for such simulations. For example, most large scale blood flow simulations to date have discarded the particulate nature of blood and its rheology and they have utilised ad-hoc, a-priori specified, Newtonian or non-Newtonian viscosity coefficients. Even particle based simulations of blood flow, with techniques such as Dissipative Particle Dynamics (DPD), have been found to predict quantities of interest that differ drastically from those obtained via experiments or from high fidelity simulations of canonical problems using boundary integral methods. Such discrepancies are to be expected as both experimental and numerical observations of Red Blood Cells (RBCs) behavior in flow have revealed very complex dynamics for individuals and populations of cells. We expect that approaches that allow micro-scale descriptions of blood at spatiotemporal scales afforded by the present and proposed work on DPD and Lattice Boltzmann (LB) could remedy this situation. Furthermore, we expect that fast and validated simulations of blood flow are also essential to design, optimize and understand micro-fluidic devices aimed at performing tasks such as blood separation, detection of Circulating Tumor Cells (CTC) and bacteria, and molecular recognition with high sensitivity and specificity. Simulations, such as those proposed here will help to advance the technology and have impacts ranging from clinical diagnostics to regenerative medicine, proteomics and organs on a chip.
The goal of this project is to provide computational tools for virtual physiological blood flow in complex geometries pertinent to simulations in vasculatures and medical devices. The project combines the expertise of three research teams in Switzerland (University of Geneva, ETHZ, USI) to provide a portable and performant simulation tool for blood flow, with fully resolved RBCs and platelets. We will deliver an integrative and scalable HPC framework, capable of performing validated simulations at the micro- and macro-scale and overcoming several of the limitations of existing software in terms of time to solution, portability and ease of use by non-computing experts. We will base our developments on two state-of-the-art codes: udeviceX, which uses DPD, allowing for micro-scale descriptions of individual RBCs, and Palabos, which employs LB methods for meso- and macro-scale continuum flows. We will systematically validate the DPD and LB simulation tools using relevant experiments and high-fidelity simulations involving boundary integral methods. Our validation and uncertainty quantification studies will recognise and tackle the heterogeneity of experimental and computationally available data. We will couple DPD and LB methods in order to provide a multi-scale description of the flow. Our goal is software that is easy to use and that conforms with the user's decision to tackle blood flow at the particulate, continuum or hybrid level. We expect to provide a framework of validated computational models that can assist scientists and clinicians in understanding blood flow and, at the same time, in designing devices, such as the high-throughput micro-fluidics used to diagnose blood-borne diseases.
Currently there are several teams (including Biros, Hoekstra, Karniadakis, Kaxiras, Melchiona, Sotiropoulos and Succi) that develop and use software for large scale continuum or particle based simulations of blood flows. We mention the group of Karniadakis (Brown University) who has pioneered the use of DPD for blood flow simulations and the group of Alfons Hoekstra (University of Amsterdam) who has performed extensive simulations using LB methods. These groups are among our collaborators and we plan to continue our interactions with them. The group of Biros has performed state of the art simulations of RBCs using Boundary Integral Methods (BIM). We have established a contact in order to use such high fidelity simulation to assess the capabilities of DPD methods in resolving canonical problems involving the interaction of RBCs. We believe that this project will distinguish itself by combining validated DPD and LB approaches with quantified uncertainties along with a proper software engineering framework that we feel is lacking from most engineering driven approaches.
We note that DPD and LB are techniques that are used across many disciplines (from fluid dynamics to traffic simulations and materials science) and we will develop software that is capable of allowing such trans-disciplinary use of our codes. Furthermore, we expect to make such portability evident and hope to establish these state-of-the-art developments as working tools for blood flow simulations on the supercomputing platforms at CSCS. We plan to continue our tradition of making software open source (both Palabos and uDeviceX are openly available on GitHub) and to provide the necessary documentation such that the code is readily usable and accessible on multiple platforms and by users with varying expertise in supercomputing. Our team has expertise in this field through the design and implementation of integration middleware for grid (LAMMPS) and particle based simulation engines (MRAG) developed within the PASC project “Angiogenesis in Health and Disease: in-vivo and in-silico”.
On the application side, our team has established collaborations with clinicians and experimentalists that can guide and use our developments. There is an existing tight collaboration between the Chopard group and Prof. Karim Zouaoui, biologist at ULB and CHU Charleroi, who is performing flow chamber experiments with whole blood. This collaboration started in the FP7 THROMBUS project. Another strong link with the medical community is given by the H2020 CompBioMed project in which B. Chopard is partner. CompBioMed is a center of excellence for HPC biomedical simulations whose goal is to provide clinicians and medical companies with access to HPC codes helping them to optimize devices, to choose the best treatments for a disease, and of course to bring a better understanding of many physiological processes. The group of Petros Koumoutsakos has tight collaborations with the group of Mehmet Toner at Harvard Medical School, on the design of high throughput micro-fluidic devices for the capturing of Circulating Tumor Cells. Further collaborations include the group of Mauro Ferrari at Southern Methodist University on simulations of RBCs in artificial capillaries and with the group of Luisa Iruella-Arispe at UCLA on blood flow in the retina. Chopard and Koumoutsakos are the Swiss coordinators of the OpenMultiMed COST action that can be a platform for sharing and discussing the results of this project.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00855.warc.gz
|
CC-MAIN-2023-40
| 8,645 | 11 |
http://knowledgelayer.softlayer.com/procedure/installing-windows-server-virtualization-windows-2008
|
code
|
Windows 2008 64-bit editions come with the option of installing Windows' next-generation virtualization application, codenamed Viridian. By default the application will not be a selectable option when Adding and Removing Roles from Windows 2008.
Please note that at this time, this is only available in full installation of 64-bit editions of Windows 2008. 32-bit versions and Server-core installations do not support this.
Hardware assisted Virtualization Technology enabled in the BIOS
Data Execution Prevention enabled in the BIOS
Intel Execute Disable (XD)
AMD No Execute (NX)
To enable the Role option for Windows Server virtualization a few patches must be installed.
Open an explorer window and browse to %sysdir%\Windows\wsv, usually C:\Windows\wsv. Two files will be located in that folder:
These can be installed in any order. Install both patches and then reboot the system.
Once the system has completed its reboot you will need to add the Role to the system. Please see Adding and Removing Roles on how to add the role and begin the installation.
After adding the role and clicking next, the Create Virtual Networks dialog box should appear.
Here's where things get a little weird, and attention must be paid, as network connectivity will be lost for a short period of time.
Select Local Area Connection which should be your private network adapter. Click continue. The installation will continue and require you to reboot. After the reboot, log into the system via the Public connection. You must log in with the same user as you used to install this.
The Resume Configuration Wizard should start up to finish the installation. At this point the networking protocols that are assigned to the network interface you chose will be unbound. You will lose network connectivity to that interface. If not, you probably received the following error:
"Attempt to configure Windows Server Virtualization failed with error code 0x80078000."
To resolve the error, go to Start >> Programs >> Administrative Tools >> Windows Virtualization Management. This is the new management console for Windows Virtualization.
Click on the server in the right hand pane. Then in the action pane, click Virtual Network Manager. This will bring up a new dialog box, Virtual Network Switch Management:
In the left hand page click the network switch under Add New network switch.
Rename it to private and for Connection select Physical network adapter and select the 1st network adapter. After this, all network protocols will be unbound from the private network interface.
IMPORTANT PART: In order to re-establish network connectivity to the private side, we will need to configure the new switch device and NOT the private interface. Go to Start >> Settings >> Network Connections. A new device called Local Area Connection 2 should appear; its device type is a switch.
Right click this item and go to properties. Select ipv4 and its properties. You will need to configure this device with the private network interface IP address, netmask, and DNS servers. After this is complete, click OK and close. This should re-enable networking on the private side. Verify this by pinging the private IP.
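If you prefer the command line, the same settings can be applied with netsh; the interface name matches the example above, but the addresses are illustrative, so substitute your own private IP, netmask, and DNS server:

netsh interface ipv4 set address "Local Area Connection 2" static 10.0.0.5 255.255.255.0
netsh interface ipv4 set dnsservers "Local Area Connection 2" static 10.0.0.2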
RDP to the private IP to setup the public network.
Adding a public switch is the same as adding the private one. Go back to Virtual Network Switch Management and select Add new network switch, select external as the network switch type and click add. Rename the switch to Public, select Physical network adapter, and then select the second network adpater. Click ok. This will cause the public port to not respond anymore to the network. Configure the new public switch interface just like you did with the private one with the proper settings.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891886.70/warc/CC-MAIN-20180123091931-20180123111931-00303.warc.gz
|
CC-MAIN-2018-05
| 3,718 | 23 |
https://appwrite.io/docs/tooling/command-center
|
code
|
The Appwrite Command Center is designed to improve the developer experience by enabling straightforward navigation and exploration of features, settings, and sections of the Appwrite Console. The Command Center is enhanced with AI capabilities and is the home of the Appwrite assistant. It allows you to execute tasks and access features within the Appwrite Console efficiently using keyboard shortcuts and advanced context-aware search.
You can access the Command Center by pressing ⌘ + K on Mac or Ctrl + K on Windows and Linux devices or by clicking the search icon in the Console top navigation bar. A modal will appear, presenting a search input and a list of commands relevant to your current Console context.
The Command Center emphasizes keyboard navigation. You can browse through commands using the up and down arrow keys and execute them with the Enter key. The search input lets you quickly filter and find specific commands or entities within the Console. Additionally, some commands have dedicated keyboard shortcuts that can be used for immediate execution without opening the Command Center.
The Command Center includes a variety of navigation commands that are also useful for exploring the different options and features the Console offers. You can quickly access different sections like Databases, Auth, Security, and Functions screens using the Command Center. You will also find context-sensitive commands on each page that adapt based on your current location within the Console, providing relevant options and shortcuts.
The Command Center offers context-sensitive commands for creating entities like buckets, functions, database attributes, etc. Specific commands trigger the opening of new panels, facilitating deeper interaction and task completion directly from the Command Center.
An integral part of the Command Center is the Appwrite AI Assistant, trained on Appwrite's extensive documentation, content, and knowledge base. The Assistant can answer Appwrite-related queries with detailed explanations, step-by-step instructions, and relevant code snippets, enhancing your ability to utilize Appwrite quickly and efficiently.
Many developers favor keyboard interactions for efficiency and speed. The Command Center was designed with keyboard optimization in mind. It caters to the needs of keyboard-centric developers, enabling various tasks and efficient navigation across the Console without relying on a mouse or trackpad.
You can use your up and down arrow keys to navigate between different commands and your Enter and Escape keys to enter and exit specific context screens. The Command Center also includes many built-in shortcuts that can be used from any console screen and allow greater productivity.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474853.43/warc/CC-MAIN-20240229202522-20240229232522-00863.warc.gz
|
CC-MAIN-2024-10
| 2,740 | 8 |
http://www.bikeforums.net/training-nutrition/567000-dialing-my-hrm.html
|
code
|
Just to be clear, it looks like you are aware you can change the default settings for your zones on the F11, and you just don't know what you should set it to, right?
Also, I'm not quite clear on your usage of "max." Generally, "max HR" means the highest you can possibly get your HR at maximal effort (e.g. after repeated full-out uphill sprinting). The setting in the F11, however, is a suggested upper training threshold--something completely different, which probably corresponds to maybe 75% of what the F11 guesses your max HR to be.
Furthermore, I don't think the F11 is cycling-specific (F stands for fitness), and max HR and training thresholds are sport-specific, so that's something else to consider. Also, your desired HR for training will vary based on whether you are base-building, tempo-riding, doing anaerobic training, etc. I suspect the number they dial in by default corresponds to zone 1 or 2 running (not cycling) according to population averages for the gender, age, height, and weight you have in the settings.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190181.34/warc/CC-MAIN-20170322212950-00476-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 1,037 | 3 |
https://odesk.com/o/jobs/job/_~01298a98c19866ef37/
|
code
|
A company group with two branches needs a website for each branch; both of these shall have a graphical connection with each other (you will be provided with a graphical framework). One branch focuses on interior fitting and the other on software development in business software.
Using CakePHP and Twitter Bootstrap. Much focus on design and connectivity to social media like Facebook and Twitter. If you have experience of web advertising or sales, it's a bonus.
A Skype interview will be needed; Skype ID: marcus-confac
Skills: design, facebook, twitter
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021342244/warc/CC-MAIN-20140305120902-00055-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 547 | 4 |
https://crowdhealth.eu/deliverables-type/report
|
code
|
Based on the detailed D2.1 report, this document extends the state of the art in the realization of the mechanisms and algorithms of the CrowdHEALTH platform: a secure ICT platform that incorporates collective knowledge from multiple heterogeneous sources and combines it with situational awareness artefacts, based on holistic health records, heterogeneous data aggregation systems and algorithms, big data analysis and storage, mining, forecasting and visualisation, and finally policy development toolkits.
This document is the causal analysis framework of the health policies toolkit and it will be used as the base for the development of the software prototype that applies to the health analytics layer in CrowdHEALTH architecture (D5.15). This framework is focusing on the analysis of actions and events in the Use Case scenarios aiming to estimate the applicability and the effectiveness of the current health policies referring to the specific case.
This document presents deliverable D6.7 Use Case Scenarios Definition and Design v1 of Work Package 6, Use Cases Adaptation, Integration and Experimentation. The main objective is to provide the Use Case Scenarios definition and specification as well as to present the scenarios in conjunction with the identification of the involved stakeholders that are vital for the deployment of the CrowdHEALTH platform. In this deliverable, we aim to describe the representative use case scenarios for the CrowdHEALTH project.
The present document describes the results of the first development and integration cycle of CrowdHEALTH, as well as the work performed to achieve such results. Achievements are compared to the integration plan reported in D6.1, describing the level of completeness of the developed functionalities and the integration of each component, highlighting possible delays or differences with respect to the plan, and reporting the most important issues, already solved or still to be solved, that occurred during these activities.
This document is the first in a series of deliverables on data-driven analytical tools for supporting policy makers develop healthcare policies. The focus here is on population-level risk stratification, employing machine learning tools for stratifying segments of the population into different levels of risk (low, medium, high).
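As a rough illustration of what such population-level risk stratification can look like in practice, here is a minimal, hypothetical Python sketch; the features, data, and choice of classifier are invented for illustration and are not taken from the CrowdHEALTH deliverable itself:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented features per person, e.g. age, BMI, blood pressure, activity level.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Invented known risk labels for a training population.
y = rng.choice(["low", "medium", "high"], size=500)

# Fit a classifier that stratifies people into risk levels.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Stratify a new segment of the population.
new_people = rng.normal(size=(3, 4))
print(model.predict(new_people))  # e.g. ['medium' 'low' 'high']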
The aim of this Deliverable is to define the concept of Public Health Policy (PHP) and present a state-of-the-art on PHPs development, and to propose a first approach to the modelling and evaluation of PHPs that will be used in the Policy Development Toolkit (PDT) to support PHPs evaluation and development for policy-makers.
This document is part of the WP4 Information and Knowledge Acquisition and Management of the CrowdHEALTH project. The purpose of this report is to describe the current status of the Holistic Security and Privacy Framework of CrowdHEALTH, which is crucial for the protection of the CrowdHEALTH’s resources and data. This document presents briefly the regulatory requirements of CrowdHEALTH, and presents the technologies and protocols that will be used to fulfil the relevant requirements.
The Information Aggregation (IA) component enables the aggregation of different information sources to support the creation of Holistic Health Records (HHRs). The IA component handles streaming and batch data coming from various sources in a scalable, efficient and reliable manner to create Holistic Health Records (HHRs). In this respect, the goal of the IA component is to combine a number of disparate data sources into a common format and to store information in a form that makes it easily and readily available for analytics, simulations and decision making.
Full semantic interoperability in healthcare, in which all systems seamlessly communicate with each other, is one of the biggest issues still pending. In an ideal environment, the patient would have all the clinical information coming from heterogeneous providers integrated and available in a common format. For the stakeholders, it would avoid missing or duplicated clinical information, reducing hospital costs. For patients, it would imply better diagnosis and treatment, likewise reducing visits to hospitals and thus improving their daily life, anywhere at any time.
The purpose of D3.19 is to document the preliminary efforts undertaken within the context of Task 3.5 Data Cleaning including Sources Reliability Assessment. Towards this end, the scope of the current deliverable is to document the architecture and design of the Data Cleaner & Sources Verifier component and the mechanisms that will be used in order to address the volatility of the information provision, as well as the reliability of the data sources, within the context of CrowdHEALTH.
This project has been partially funded from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement no. 727560
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647639.37/warc/CC-MAIN-20230601074606-20230601104606-00410.warc.gz
|
CC-MAIN-2023-23
| 4,965 | 12 |
https://forum.atomicproject.org/index.php?u=/topic/8/backup-wallet
|
code
|
I have backed up my wallet and it created a file called atomic.dat. How do I use this backup file in order to restore my wallet? I tried dragging and dropping the file onto the Atomic wallet program, but it said "URL cannot be parsed! This can be caused by an invalid Atomic address or malformed URI parameters." I have the wallet and it's running fine now, but I want to have a backup in case my PC crashes or anything. Also, if I copy the entire wallet to another hard drive, will it sync and have the same amount of coins if I open it again from there?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806317.75/warc/CC-MAIN-20171121055145-20171121075145-00017.warc.gz
|
CC-MAIN-2017-47
| 569 | 1 |
https://slidetodoc.com/software-reliability-2-alternate-definitions-informally-denotes-a/
|
code
|
- Slides: 18
Software Reliability: 2 Alternate Definitions
- Informally denotes a product's trustworthiness or dependability.
- Probability of the product working "correctly" over a given period of time.
Software Reliability
- Intuitively: a software product having a large number of defects is unreliable.
- It is also clear that reliability of a system improves if the number of defects is reduced.
Difficulties in Software Reliability Measurement (1)
- There is no simple relationship between observed system reliability and the number of latent software defects.
- Removing errors from parts of the software which are rarely used makes little difference to the perceived reliability.
Difficulty in Software Reliability Measurement (2)
- The perceived reliability depends to a large extent upon how the product is used; in technical terms, on its operational profile.
Software Reliability
- Different users use a software product in different ways, so defects which show up for one user may not show up for another.
- Reliability of a software product is therefore clearly observer-dependent and cannot be determined absolutely.
Difficulty in Software Reliability Measurement (3)
- Software reliability keeps changing throughout the life of the product, each time an error is detected and corrected.
Hardware vs. Software Reliability
- Hardware failures are inherently different from software failures.
- Most hardware failures are due to component wear and tear: some component no longer functions as specified.
- Software faults are latent: the system will continue to fail unless changes are made to the software design and code.
- When hardware is repaired, its reliability is maintained; when software is repaired, its reliability may increase or decrease.
Reliability Metrics
- A good reliability measure should be observer-independent, so that different people can agree on the reliability.
Rate of Occurrence of Failure (ROCOF)
- ROCOF measures the frequency of occurrence of failures: observe the behavior of a software product in operation over a specified time interval and calculate the total number of failures during the interval.
Mean Time To Failure (MTTF)
- Average time between two successive failures, observed over a large number of failures.
Mean Time To Repair (MTTR)
- Once a failure occurs, additional time is lost to fix faults; MTTR measures the average time it takes to fix faults.
Mean Time Between Failures (MTBF)
- We can combine MTTF and MTTR to get an availability metric: MTBF = MTTF + MTTR.
- An MTBF of 100 hours would indicate that once a failure occurs, the next failure is expected after 100 hours of clock time (not running time).
Probability of Failure on Demand (POFOD)
- Unlike the other metrics, this metric does not explicitly involve time.
- It measures the likelihood of the system failing when a service request is made.
- A POFOD of 0.001 means 1 out of 1000 service requests may result in a failure.
Availability
- Measures how likely the system shall be available for use over a period of time.
- Considers the number of failures occurring during a time interval, and also takes into account the repair time (down time) of the system.
Failure Classes
- Transient: transient failures occur only for certain inputs.
- Permanent: permanent failures occur for all input values.
- Recoverable: when recoverable failures occur, the system recovers with or without operator intervention.
- Unrecoverable: the system may have to be restarted.
- Cosmetic: these failures just cause minor irritations and do not lead to incorrect results. An example of a cosmetic failure: a mouse button has to be clicked twice instead of once to invoke a GUI function.
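To make the metric definitions above concrete, here is a minimal Python sketch that computes MTTF, MTTR, MTBF, ROCOF, availability, and POFOD from a hypothetical observation log; the timestamps and request counts are invented for illustration:

# Hypothetical log: (clock time of failure in hours, repair duration in hours).
failures = [(100.0, 2.0), (205.0, 1.5), (330.0, 3.0), (412.0, 2.5)]
n = len(failures)

# MTTR: average time taken to fix a fault.
mttr = sum(repair for _, repair in failures) / n

# MTTF: average operating time between successive failures
# (operating time excludes repair downtime).
uptimes = []
prev_end = 0.0
for fail_time, repair in failures:
    uptimes.append(fail_time - prev_end)
    prev_end = fail_time + repair
mttf = sum(uptimes) / n

# MTBF combines the two: expected clock time between failures.
mtbf = mttf + mttr

# ROCOF: failures per hour over the observed interval.
total_time = prev_end
rocof = n / total_time

# Availability: fraction of total time the system was usable.
availability = (total_time - sum(r for _, r in failures)) / total_time

# POFOD: failures per service request (time does not appear).
requests, failed_requests = 5000, 5
pofod = failed_requests / requests  # 0.001

print(f"MTTF={mttf:.1f}h MTTR={mttr:.2f}h MTBF={mtbf:.1f}h")
print(f"ROCOF={rocof:.4f}/h availability={availability:.2%} POFOD={pofod}")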
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302355.97/warc/CC-MAIN-20220120160411-20220120190411-00227.warc.gz
|
CC-MAIN-2022-05
| 3,788 | 19 |
https://goodcleancrazy.wordpress.com/tag/video-voxel-synj-ghrib/
|
code
|
3d Dot Heroes Screenshot
About five years ago I saw Mister Ghrib 4 : Carl’s Memories
by synj, which reminded me of a 3D Super Mario Brothers 3. What if you could take a 3d camera down into the 2d video game world so that you could see each individual cube that made up all the characters and the entire game world? Wouldn’t that look cool? Last year I stumbled onto Metroid Cubed,
a full voxel port of the original NES Metroid game. This was great, but it was still not quite what I had imagined. I wanted the user to be able to zoom the camera in and see the lines between the blocks.
Well, today I did another search and found two great looking games, Fez,
and 3D Dot Game Heroes.
3D Dot Game Heroes is exactly what I had in mind for an old-school 2d game brought to 3d. You can see the individual blocks of every Zelda and Final Fantasy parody! Sadly, this is slated for a November release in Japan only! (Curs-ed Language Barrierrrr!!) The other game, Fez, is a really neat concept, a sort of compressed 3d game world. It's 2d, except while you are changing viewing angles, which is when it briefly becomes 3d; then the world is squished back to 2d. This allows some really nifty spatial puzzles, making the game look like huge fun to play. It's as tough to explain as Narbacular Drop, so check out the trailer.
In between, I experimented with making my own voxel Mario (way bigger pain than it was worth), attempted to program my own game in OpenGL (another big pain), and saw some other videos that reminded me of this cool concept. Fez and 3D Dot Game Heroes are way better!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00322-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 1,580 | 8 |
http://rxiv.org/abs/1701.0067
|
code
|
Authors: Simon Maskell
The Dezert–Smarandache theory (DSmT) and transferable belief model (TBM) both address concerns with the Bayesian methodology as applied to applications involving the fusion of uncertain, imprecise and conflicting information. In this paper, we revisit these concerns regarding the Bayesian methodology in the light of recent developments in the context of the DSmT and TBM. We show that, by exploiting recent advances in the Bayesian research arena, one can devise and analyse Bayesian models that have the same emergent properties as DSmT and TBM. Specifically, we define Bayesian models that articulate uncertainty over the value of probabilities (including multimodal distributions that result from conflicting information) and we use a minimum expected cost criterion to facilitate making decisions that involve hypotheses that are not mutually exclusive. We outline our motivation for using the Bayesian methodology and also show that the DSmT and TBM models are computationally expedient approaches to achieving the same endpoint. Our aim is to provide a conduit between these two communities such that an objective view can be shared by advocates of all the techniques.
Comments: 19 Pages.
[v1] 2017-01-03 05:02:46
Unique-IP document downloads: 9 times
Add your own feedback and questions here:
You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463614615.14/warc/CC-MAIN-20170530070611-20170530090611-00342.warc.gz
|
CC-MAIN-2017-22
| 1,544 | 7 |
http://www.gamedev.net/index.php?app=forums&module=extras§ion=postHistory&pid=5059359
|
code
|
That still won't cut it in this scenario, as the OP says:
You can erase items from a list as your iterating it, you just need to stash the next iterator before you do stuff to the current one.
That would mean that it's possible that the iterator obtained before doStuffWith(it); pointed to the item which got removed, which means that iterator is now invalid.
It could also result in removing other objects from the list
In fact if the current item was not removed but the next item was, then you must only advance the iterator at the end of the iteration, as per normal iteration.
The problem is that in the case being described, there is no way to know whether the current item is being deleted or the next item is being deleted. So you don't know whether to get the advanced iterator at the beginning or at the end of each iteration.
Hence it is not solvable without at least some information about what is deleted coming back to the code in this loop. If you don't know that the current item or the next item is always still there at the end of each iteration, then you can't do it.
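One common way out of this bind is deferred removal: instead of erasing during iteration, mark items dead and sweep them in a single pass afterwards, so no iterator is ever invalidated mid-loop. Below is a minimal sketch of that idea, written in Python rather than C++ for brevity; the game objects and logic are invented, and this is one possible resolution, not necessarily what the posters settled on:

class Item:
    def __init__(self, name):
        self.name = name
        self.alive = True  # tombstone flag instead of immediate erasure

def do_stuff(item, world):
    # Hypothetical game logic: may "remove" any item, including ones the
    # loop has not reached yet, which is exactly the problematic case.
    if item.name == "bomb":
        for other in world:
            if other.name in ("bomb", "crate"):
                other.alive = False

world = [Item("player"), Item("bomb"), Item("crate")]
for item in world:
    if item.alive:           # skip items already marked dead this pass
        do_stuff(item, world)
world = [i for i in world if i.alive]  # single sweep after iteration
print([i.name for i in world])         # ['player']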
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769685.0/warc/CC-MAIN-20141217075249-00090-ip-10-231-17-201.ec2.internal.warc.gz
|
CC-MAIN-2014-52
| 1,080 | 7 |
http://dfw2gug.org/blog/2011/september-2011.html
|
code
|
07 September 2011 - Erik Weibust
The September meeting will be a continuation of our August meeting. We will spend a few minutes in the beginning of the meeting getting newcomers caught up, and refreshing attendees from last month on where we left off.
Grails is a Groovy-based, rapid-application development platform for building web and web-service applications that run on the JVM.
This session will be very "hands-on". Please bring your laptop and plan to "learn while doing." I will start with a high-level intro to Grails, and then we will start building a demo application. By the time we are done you will have built custom domain objects, controllers, gsps, services and tag libraries.
This hands-on workshop will be spread over two months, August and September.
Erik Weibust is a Sr. Architect at Credera. Erik is very active in the DFW technology user group scene. Erik helps lead JavaMUG, a DFW Java-focused user group. Erik also helps lead the DFW2GUG, a Groovy and Grails focused user group in Dallas. Erik was also the founder of the Spring Dallas UG, a group focused on promoting and educating Dallas Java developers on the Spring Framework.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867092.48/warc/CC-MAIN-20180525121739-20180525141739-00045.warc.gz
|
CC-MAIN-2018-22
| 1,156 | 6 |
https://forum.kingsnake.com/ball/messages/74481.html
|
code
|
Posted by MightyPython on May 20, 2003 at 07:04:26:
In Reply to: Snake strikes at me posted by LeeFobes on May 19, 2003 at 20:34:51:
Glad you mentioned that he's in shed, because a lot of times when they are about to shed they can get quite cranky. When they are in pre-shed and their eyes cloud over, it obviously means that they can't see during this time, so you basically have a blind animal until it sheds. So if you try to handle it you'll probably startle it, since it can't see that you aren't a threat. It's a good idea not to handle or feed them during this time, for these reasons and since the whole process is a bit stressful to them. You said that it tried to strike you when you type on the keyboard? Do you have your setup right next to your computer or something? If this is the case you may want to move it to a quieter part of the room, because all the typing and activity involving you and your computer might make it a bit stressed or cranky too. By the way, you might want to start going to the new forum since everyone is posting over there now. I just pop back here every once in a while just in case. They're eventually supposed to lock this one from posting anyway. Here's the link to the new forum.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988831.77/warc/CC-MAIN-20210508001259-20210508031259-00524.warc.gz
|
CC-MAIN-2021-21
| 1,288 | 6 |
http://ubuntunation.blogspot.com/2009/11/howto-rotate-screen-in-linux-using.html?showComment=1294871330055
|
code
|
When you open the normal display options in Ubuntu the choice to rotate your display in landscape or portrait mode is disabled if you are using proprietary NVIDIA drivers.
To enable rotation, add the RandRRotation option to the Device section of xorg.conf. First open the file for editing:

gksu gedit /etc/X11/xorg.conf

Then edit the Device section so it looks like this:

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BoardName "GeForce 8800 GTS"
    Option "RandRRotation" "true"
EndSection

After restarting X, you can rotate the screen from the command line:

xrandr -o left

and rotate it back to normal:

xrandr -o normal
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189466.30/warc/CC-MAIN-20170322212949-00119-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 614 | 7 |
https://marealtacharter.com.br/exactly-how-you-can-safely-uninstall-vpn-gateway-70/
|
code
|
One of the most common problems computer users encounter is that a program can't be removed. Today let's see how to correctly uninstall VPN Gate Client Plug-in with SoftEther VPN Client in Windows, and I'll also detail the possible reasons why you can't complete the removal.
If you are unable to completely uninstall VPN Gate Client Plug-in with SoftEther VPN Client, the cause may be one of the following situations:
To rule out the above causes, you can try to reinstall VPN Gate Client Plug-in with SoftEther VPN Client by downloading it from the official site, or perform a full scan with your antivirus product.
In this part I have simplified the description of the needed steps, and then explain them in detail. Hopefully this is the best way for you to learn the whole cleaning procedure.
If you're using the administrator account or an account that has administrative rights, then you can jump to the next step. Otherwise you'll need a permission password when making changes in the system. To make sure the removal goes smoothly, check the System Tray in the bottom-right corner to exit the program.
To do this, right-click on the Start button and select Programs and Features > Double-click VPN Gate Client Plug-in with SoftEther VPN Client in the list to activate the built-in uninstaller > Confirm the removal > Reboot the computer right now or do it later.
To do this, execute "regedit" in the search input field to open the Registry Editor > Navigate to this folder: HKEY_CURRENT_USER\Software\(VPN Gate Client Plug-in with SoftEther VPN Client or the publisher's name)\, and delete it if found > Navigate to this folder: HKEY_LOCAL_MACHINE\SOFTWARE\(VPN Gate Client Plug-in with SoftEther VPN Client or the publisher's name)\, and delete it if found > Search "VPN Gate Client Plug-in with SoftEther VPN Client (or the publisher's name)" to check if there are any other leftovers > Reboot the computer.
Don't want to bother performing the regular steps? Then this would be your best choice: using Max Uninstaller, which handles all the necessary work for you, to safely and completely uninstall VPN Gate Client Plug-in with SoftEther VPN Client. It's like breaking open a path through brambles and thorns; there's no need to worry about the problems that may appear in the middle of the removal.
I'll explain every step for you, so that the next time you want to remove a program by using it, you can do it faster:
The installation will finish in one minute. Then run the application; it will automatically scan all the currently installed programs and show them to you in a list.
Select VPN Gate Client Plug-in with SoftEther VPN Client in the list, and click Run Analysis on the right. It will locate all the related files of the target program and display them in a list with details. Just keep the items checked, and click Complete Uninstall.
When the last part is done, you will see a green Scan Leftovers button; click it to find all the remaining files that may hide in various folders. Likewise, keep all the items checked, and click Delete Leftovers to completely uninstall VPN Gate Client Plug-in with SoftEther VPN Client.
When it says "VPN Gate Client Plug-in with SoftEther VPN Client has been completely removed," click "Back to Step 1" to refresh the programs list. VPN Gate Client Plug-in with SoftEther VPN Client should no longer be there, and you can try Max Uninstaller on any other program you wish to remove. Are you sold?
Besides uninstalling unnecessary programs in the system, there are several other ways to optimize your computer's performance. For example:
These should be the easiest to reach and understand. Surely you can find more maintenance tips on the net, and they are all free. I do hope this page has provided the most practical information you're looking for.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141196324.38/warc/CC-MAIN-20201129034021-20201129064021-00415.warc.gz
|
CC-MAIN-2020-50
| 4,471 | 16 |
https://research.qut.edu.au/centre-for-justice/our-people/melinda-laundon/
|
code
|
PhD (Queensland University of Technology), Master of Management (Australian National University), Master of Public Policy (Australian National University), Bachelor of Arts (Hons) (Queensland University of Technology), Bachelor of Arts (Queensland University of Technology)
My research interests include the higher education policy environment, and in the management and organisation studies field, performance, recognition, learning and development. My work is often based on interdisciplinary approaches combining public policy, employment relations and HRM theory. Recent and current research projects include evaluation of learning and teaching, teaching philosophies of university educators, and reward and recognition in the Australian finance sector. My career prior to academia was in the Australian Public Service, most recently as the Australian Research Council's Assistant Director, Research Performance and Analysis.
Policy analysis and professional roles include developing and advising on research impact case studies for the national Engagement and Impact Assessment 2018 exercise, and policy, guidelines and stakeholder consultation on the Australian Government's Excellence in Research for Australia 2010, 2012 and 2015 evaluations. I've conducted commercial research and consultancy for large finance companies, government agencies and non-profit organisations.
- Laundon M, Cathcart A, McDonald P, (2019) Just benefits? Employee benefits and organisational justice, Employee Relations, 41 (4), pp. 708-723.
- Laundon M, Williams P, (2018) Flexible work: Barrier to benefits?, Financial Planning Research Journal, 4 (2), pp. 51-68.
- Laundon M, McDonald P, Cathcart A, (2019) Fairness in the workplace: organizational justice and the employment relationship. In T Dundon, Elgar introduction to theories of human resources and employment relations (Elgar Introductions to Management and Organization Theory), Edward Elgar Publishing, pp. 295-310.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510138.6/warc/CC-MAIN-20220516140911-20220516170911-00140.warc.gz
|
CC-MAIN-2022-21
| 1,964 | 6 |
http://www.mcmelectronics.com/product/28-12135&cid=prodCrossSell
|
code
|
102-155 - Jumper Wire Kit
Jumper Wire Kit: Contains 350 lengths of pre-stripped, pre-formed #22 solid wire in various colors. 14 different lengths of 25 pieces each. Contained in a large 14-compartment plastic case.
I have dealt with MCM Electronics a number of times and have been more than satisfied.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00101-ip-10-164-35-72.ec2.internal.warc.gz
|
CC-MAIN-2016-26
| 300 | 3 |
https://www.argyllscott.sg/job/quantitative-researcher
|
code
|
Our client, a well-established US quantitative finance company, is seeking a Quantitative Researcher. They are prioritizing growth in their Vietnam business and expanding operations, so this is an incredibly exciting time to join the company.
Job Location: Fulltime - Ho Chi Minh City, Vietnam.
- Create and develop Alphas and other utilization algorithms on the company's product.
- Conduct research on academic quantitative finance literature.
- Identify and design new research domains. Generate ideas to grow the field.
- Analyze current functionalities available to researchers, identify any issues with the platform, and provide solutions and recommendations to the team.
- Design and test new functionalities and datasets on the platform.
Business Development and Consultant Engagement:
- Work with local Territory Managers and other Business Development stakeholders to enhance & execute company business strategy for user and consultant acquisition.
- Conduct training sessions for users and consultants.
- Prepare and update the training curriculum of the company.
- Familiarity and competence in using the company's platform; ex-VRC Research Consultant or Research Consultant experience preferred.
- Possess or expect a Bachelor's degree or advanced degree in engineering, science, mathematics, finance or any other related field that is highly analytical and quantitative from a leading university.
- Demonstrated programming experience in one of the following (Java/C++/C/Python/MySQL/SQL Server); knowledge of UNIX preferred.
- Possess a research scientist mind-set; be a self-starter, a creative and persevering deep thinker who is motivated by unsolved challenges.
- Have a strong interest in learning about worldwide financial markets.
- Possess good communication and presentation skills in English.
This is an excellent opportunity for ambitious talents to accelerate their career in a dynamic team. If you are interested, please drop me a line via [email protected] or +84 938 790 578 for more information.
Argyll Scott Asia is acting as an Employment Agency in relation to this vacancy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473360.9/warc/CC-MAIN-20240221002544-20240221032544-00046.warc.gz
|
CC-MAIN-2024-10
| 2,118 | 19 |
https://jobs.jobvite.com/roundtower/job/oriIbfwm
|
code
|
Senior Architect, Cloud / DevOps - AWS
Sr. Cloud and DevOps Architect - AWS
RoundTower is a premier systems integrator that provides innovative solutions and services in the areas of data center infrastructure, converged platforms, mobility, cloud automation and orchestration, DevOps, and data analytics. RoundTower is helping enable its customers to drive positive business outcomes by becoming more agile and efficient through the use of technology.
We are currently looking for a hands-on experienced Sr. Cloud and DevOps Architect within our Cloud & DevOps Practice. This role provides support for RoundTower clients during Professional Services engagements. Primary responsibilities will be supporting post-sales activities to execute Professional Services offerings, along with architecting and building automated tools to increase efficiency in those offerings. The ideal candidate has experience helping organizations design and implement solutions that meet their needs and is looking to grow both technically and organizationally.
Capabilities and Responsibilities
- Architecting and developing customer applications to be AWS cloud optimized.
- Working as a technical leader alongside customer business, development, and infrastructure teams.
- Working as both an infrastructure and application development specialist.
- Advising and implementing AWS cloud best practices.
- Proactively seek new value-add opportunities for customers and converts those new opportunities to realized value.
- Ability to perform implementation, support, and / or migration to DevOps toolsets (e.g. Automation, Configuration Management, Containers, CI/CD, Source Control).
- Development of custom automation based on client needs.
- Ability to analyze and interpret performance data in order to provide assessments and present solutions derived from empirical data and customer business requirements.
- Manage multiple engagements simultaneously in conjunction with other Practice Engineers while ensuring proper collaboration and hand-off as needed.
- Apply and create best practices in multiple technical domains using AWS and third party technology products.
- Guide customers appropriately following the AWS Well-Architected Framework.
- Ability to travel to customer sites up to 40% within the eastern US time zones.
- Highly technical and analytical, possessing 9 or more years of IT implementation experience, with at least 5 of those years hands on implementing AWS services
- 6+ years of hands on programing skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
- 5+ years creating reference architectures, implementation and system design, and C-level technical reports and presentations
- Excellent written and verbal communication skills, interpersonal and collaborative skills, and the ability to communicate security and risk-related concepts to technical and nontechnical audiences.
- Exhibit excellent problem solving and analytical skills, the ability to manage multiple projects under strict timelines, as well as the ability to work well in a demanding, dynamic environment and meet overall objectives.
- Proven experience with AWS Native automation solutions, including services such as AWS CloudFormation, AWS CodeBuild, AWS CodePipeline, AWS CodeDeploy, AWS EC2 Systems Manager, and AWS CodeStar.
- Curious mindset with the ability to learn and adapt to new technologies
- BS level technical degree or equivalent experience; Computer Science or Engineering background preferred; Masters Degree desired.
- 5+ years in Application design and refactoring into SaaS or microservices
- 5+ years proven experience with DevOps solutions, including tools such as Kubernetes, RedHat OpenShift, Rancher, Jenkins, GitHub, Terraform, AppDynamics, and others.
- Integration of AWS cloud services with on-premise technologies from Microsoft, IBM, Oracle, HP, SAP etc.
- Agile software development expert
- Strong scripting skills (e.g. Powershell, Python, Bash, Ruby, Perl, etc.)
- One or more certifications (AWS Professional Certifications, Kubernetes CKA)
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371662966.69/warc/CC-MAIN-20200406231617-20200407022117-00400.warc.gz
|
CC-MAIN-2020-16
| 4,085 | 31 |
https://www.elance.com/s/solace1024/job-history/10183/?t=1
|
code
|
Tonight I need someone who knows how to fix errors on an nginx / php5-fpm / APC / WordPress MU / BuddyPress system on AWS.
I know a fair amount and I have had other admins look at it, so this is someone who knows their chops.
We keep getting upstream connection errors on scripts in the error.log, etc.
You will be working directly with me on skype.
Skills: wordpress, buddypress, nginx, php5-fpm, ubuntu admin
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011062835/warc/CC-MAIN-20140305091742-00026-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 400 | 5 |
http://docs.aptlab.net/geni-lib/tutorials/wanvts.html
|
code
|
VTS: Basic WAN Topology¶
This example walks through creating a two-site WAN topology with one forwarding element at each site. Like all VTS reservations that require compute resources, the resources for each site will come from two different aggregate managers. This example also employs further sequencing constraints in order to build the WAN circuit.
In order to build a circuit between two sites, those sites need to share a common circuit plane. This is simply a named substrate that both sides have a common attachment to. In this tutorial we will use the geni-al2s circuit plane, which is currently available at most GENI VTS sites and replaces the geni-mesoscale circuit plane that is available at some sites but is being phased out.
This example requires that you have set up a valid context object with GENI credentials.
For this example we’ll use InstaGENI compute resources, but this would work for ExoGENI sites that have VTS support as well if you change the InstaGENI imports to the relevant ones for ExoGENI.
Set up VTS Slivers¶
We will first set up VTS slivers at both sites, before creating the local compute resources. This is not a strict requirement - you must always set up the VTS sliver at a site before the compute sliver, but you can request the compute sliver at a site before requesting the next site VTS sliver if that better fits your workflow.
In this example we will save the VTS manifests for later use to get compute resources, in case your interactive Python session needs to be restarted.
We need to set up basic imports to create requests and send them to the aggregate:
import geni.rspec.pg as PG
import geni.rspec.igext as IGX
import geni.rspec.vts as VTS
import geni.aggregate.instageni as IGAM
import geni.aggregate.vts as VTSAM
Here we also set up the slice name you're going to use, as well as the context object that specifies your credential information. If you set up your geni-lib using the GENI Portal Import method, the code below will work directly. If you built a custom context using your own Python code, you will need to replace the code below to load your custom context:
import geni.util

context = geni.util.loadContext()
SLICENAME = "my-slice-name"  # Change this to be your slice name
If you do not have a slice available in your project, you may need to go back to the GENI Portal web interface and create a new slice. Also, if you have multiple projects, you may need to modify which one is being used by setting the project attribute on your context object.
VTS reservations are typically a multistage process, where the VTS resources at a site must be reserved before the compute resources, or neighbour site VTS resources, and the results from the earlier reservations will be used to seed data in all subsequent reservations. In the case of WAN reservations we will need advertisement information from the remote VTS site we intend to connect our circuits to:
remote_ad = VTSAM.NPS.listresources(context)
We need to search this remote advertisement for information that describes the endpoint we want to use for our chosen circuit plane:
for cp in remote_ad.circuit_planes:
    if cp.label == "geni-al2s":
        remote_endpoint = cp.endpoint
We now start to build our primary site VTS request rspec:
s1r = VTS.Request()
As in previous tutorials we will select a default L2 learning image for our forwarding elements:
image = VTS.OVSL2Image()
We the instantiate a single forwarding element with this image, and request a local circuit to connect to our VM, as well as a WAN circuit to connect to the remote site:
felement = VTS.Datapath(image, "fe0")
felement.attachPort(VTS.LocalCircuit())
wan_port = felement.attachPort(VTS.GRECircuit("geni-al2s", remote_endpoint))
s1r.addResource(felement)
We have chosen to use a GRE Circuit here to reach the remote site, although other types might be available. Each site advertises a list of supported encapsulation types for each circuit plane, allowing you to choose the one that best suits your needs based on performance and packet overhead.
Now our request object is complete for our first site, so we can contact the aggregate manager and make the reservation:
ukym = VTSAM.UKYPKS2.createsliver(context, SLICENAME, s1r)
If you are at an in-person tutorial you may need to replace VTSAM.UKYPKS2 with the aggregate you have been given on your tutorial worksheet.
We will write out our returned manifest to disk in case we need to restart our Python session (the file name here is arbitrary):

ukym.writeXML("vts-uky-manifest.xml")
Now we will start building the VTS request at the remote site:
s2r = VTS.Request()
The basic parts of the request are the same at each site:
felement = VTS.Datapath(image, "fe0")
felement.attachPort(VTS.LocalCircuit())
s2r.addResource(felement)
Now we need to attach one port to our forwarding element that connects to the remote site that we have already configured:
This searches our previous manifest for the WAN port we have already defined, and gathers the endpoint information to put in the remote request. The combination of this information will create a complete WAN circuit.
Having created our request, we send it to the aggregate manager to reserve our resources, and write the output to a file:
npsm = VTSAM.NPS.createsliver(context, SLICENAME, s2r)
npsm.writeXML("vts-nps-manifest.xml")
Set up InstaGENI Compute Slivers¶
As we have two sites, we will need to set up our compute slivers at both sites, using the manifests returned from each VTS request. We want to set up IP addresses that we will use on both sides of our WAN topology:
IP = "10.50.1.%d" NETMASK = "255.255.255.0"
Each request is relatively simple, containing only a single VM connected to a single VTS port, pulled from the site VTS manifest:
ukyr = PG.Request()
for idx, circuit in enumerate(ukym.local_circuits):
    vm = IGX.XenVM("vm%d" % (idx))
    intf = vm.addInterface("if0")
    intf.addAddress(PG.IPv4Address(IP % (1), NETMASK))
    ukyr.addResource(vm)
    lnk = PG.Link()
    lnk.addInterface(intf)
    lnk.connectSharedVlan(circuit)
    ukyr.addResource(lnk)
The code above is the same as in earlier tutorials, which you can refer to for more thorough explanation.
Now we make the reservation:
ukyigm = IGAM.UKYPKS2.createsliver(context, SLICENAME, ukyr)
geni.util.printlogininfo(manifest=ukyigm)
We execute nearly identical code for the second site (note the IP address change):
npsr = PG.Request()
for idx, circuit in enumerate(npsm.local_circuits):
    vm = IGX.XenVM("vm%d" % (idx))
    intf = vm.addInterface("if0")
    intf.addAddress(PG.IPv4Address(IP % (2), NETMASK))
    npsr.addResource(vm)
    lnk = PG.Link()
    lnk.addInterface(intf)
    lnk.connectSharedVlan(circuit)
    npsr.addResource(lnk)
Now we make the second site reservation:
npsigm = IGAM.NPS.createsliver(context, SLICENAME, npsr)
geni.util.printlogininfo(manifest=npsigm)
In a few minutes you should be able to log into your VMs with the info printed out by the above step and send test traffic (ping, etc.) between the VMs across your VTS WAN topology.
Once you are done using your topology and exploring the tutorial, please delete all the resources you have reserved:
IGAM.NPS.deletesliver(context, SLICENAME)
IGAM.UKYPKS2.deletesliver(context, SLICENAME)
VTSAM.NPS.deletesliver(context, SLICENAME)
VTSAM.UKYPKS2.deletesliver(context, SLICENAME)
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00294.warc.gz
|
CC-MAIN-2022-40
| 7,136 | 59 |
https://otter.ai/s/iSPyJJt5QUGQH99JqUq0bA
|
code
|
CUNY2020 virtualWorkshop 3.23.2020
1:54PM Mar 24, 2020
Shohini & William
Noah & William
Noah, Patty, William
Patricia (Patty) Reeder
Patricia & William
Patricia & Noah
Brielle Stark & Brian MacWhinney
Brian MacWhinney & William Matchin
Okay, hello, everyone. It's about 10 o'clock. So I think we should get started here. So just to start, my name is William Matchin.
This workshop started after CUNY 2020 at UMass Amherst unfortunately had to be cancelled due to the coronavirus pandemic that was occurring, but very fortunately for all of us (I think many of you attended CUNY), they transitioned the conference to a virtual conference using Zoom. And it was actually, I think, fantastic. And just thanks to all the organizers for getting that going in such a short time; it was really surprisingly good. And one of the things that was good about it was that it allowed junior scientists, who benefit greatly from having these presentations to put on their CV and network, to at least get some of that opportunity. But one of the things that came up during the conference, just in talking informally to attendees, was a concern about data collection. The opportunities for data collection are of course extremely problematic, mostly halted right now due to the academic and social distancing that everyone is doing. And so there was a lot of interest in... sorry, my family's being distracting here. There's a lot of interest in getting remote or online data collection going. And I think it's actually quite fortunate that we're at this part of our careers right now, because there's actually a lot of tools available that people have been using for some time in linguistics and psycholinguistics. So I think that this actually is a good opportunity for us to learn about them and become familiar with them, and basically, you know, start using those tools as much as we can to get that going. So at any rate, for today this is the schedule that I was able to cobble together; again, just to let you all know, basically it was cobbled together in the last two days. So, you know, I'm extremely grateful to everyone who reached out to me, who offered to help or to present. And also, thank you so much to you guys for showing up and basically helping to make this a worthwhile workshop. So basically what we'll be doing is we'll be starting the presentation with Shohini Bhattasali, who I hope is here. Shohini, if you're here, can you please chat me just to make sure that we're on schedule for today. And so Shohini is a researcher at the University of Maryland. And she has been doing this interesting work in neuroimaging. And one of the things that is very difficult right now is for all of us that do neuroimaging. So, you know, I am a cognitive neuroscientist, I study language in the brain and typically use fMRI and lesion-symptom mapping in aphasia. And right now, of course, even with the remote tools that we're going to be discussing, it's going to be, you know, essentially still impossible to collect neuroimaging data. So one thing that we wanted to make sure to add was the opportunity to basically access publicly available datasets. And fortunately, again, like I said, a lot of this is becoming more and more available. And so she's been doing some pretty interesting work, and they've posted their datasets online. So her presentation will focus on discussing those data sets: basically, what they are (they're both fMRI and EEG), and how to access them and understand the data. And then following that, we're going to have a nice long, almost two hour workshop by Noah Nelson and his colleagues at FindingFive. So again, you know, I do neuroimaging, I'm not really familiar with any of these tools.
So it's actually one of the great opportunities of organizing workshops: you can organize it by not knowing anything yourself about what you're organizing, but getting other people that do know what they're doing to come and teach you. So Noah has been working on FindingFive, which is really a great platform for online linguistic and psycholinguistic experiments. And they've got a lot of nice features. And they've focused a lot on making this tool accessible and easy for those of us that are not as savvy with programming or familiar with the tools, to get them up and running quickly, and integrate that with Amazon Mechanical Turk, which I'm sure Noah will talk about in detail.
So that's gonna be a great workshop. It's going to be almost two hours and it's going to be hands on, so you'll have an opportunity to access their system and try to get your own experiment up and running. And so while that's happening, if you need help, feel free to use this q&a document. I posted at the beginning of the chat links to three documents online that you can edit. One is a Google document for q&a. So basically, if you have any kind of just question, I'm going to keep you all muted. And then what I'm going to do is basically go to the question document, and I have some wonderful volunteers that offered to help. That's Nick Wang at UConn when rhetoric at UC Davis. And I also got some help from Lena, Carlos skya. And Jennifer Arnold, who offered to sort of monitor the q&a document, and basically what we'll do is we'll look at the q&a document, and when we see questions, when the appropriate time comes, we'll basically unmute you so that you can ask your question and get that response. If you want to have, like, a more private video chat (and that might be particularly helpful for the FindingFive tutorial), you can go to that Excel spreadsheet. It's this open Zoom meeting spreadsheet; in that first tab, the Monday virtual workshop, basically, you can just put your name and put a link to a Zoom meeting or something like Skype. Oh, yeah. Okay, so someone just asked if I can post the documents again; I'm going to send those in the chat right now.
So let's see.
And please bear with me because I'm still getting used to Zoom myself. By the way, Zoom is really fantastic for meetings; I've been using Skype for all this time, and, you know, really, Skype is awful and Zoom is great. So it's really a great resource. At any rate, so this Excel spreadsheet, the Zoom sign-ups: just go put your name, put a link to a Zoom meeting or a Skype meeting or any other kind of
Maximum participants, does anyone know how to add more participants? I didn't realize that there was some kind of limit to the number of people that can join.
If anyone knows how to add more than 100, could you add it to the chat? If you, well, so if you know people that are trying to get in... Oh, 6000 or whatever, I just noticed that on the Google Sheets, someone... don't have to create... Okay, wow, that's crazy. Okay, so apparently, apparently I have to pay more money to get more than 100 people. So consider yourselves the lucky ones who are able to log in. And I think I've got our presenters all logged in for this first session. So that's good. But this will all be recorded and posted later. Okay. So if you weren't able to make it, sorry to people that are listening in the future, but we had you in mind at this moment in our lives. Okay, so then, like I said, we'll do the FindingFive workshop; you'll be able to log into their system and get an experiment up and running. Like I said, if you want some one-on-one help, feel free to post a link in your chat. So that way we can get someone to come and talk to you and give you any help that you might need. And then there's one more link to resources. That's the third document that I sent out. Basically, this is just a link to all the materials that we discussed. So there's just a list of things that you can find online. And if you have anything to add, please feel free to add to that and edit that document. That's not a problem at all. I just wanted this to be a resource that's available to all of you so that you can just, you know, communicate with each other and share any resources that you might have or experience that you're willing to share. So then we're gonna have lunch around 12:15 to 12:45. This is mostly just to give you guys a break, kind of tune out for a little bit. And then at 12:45 we'll come back with a short presentation by Florian Schwarz, and he's going to talk about PCIbex, which is a sort of alternative to FindingFive. It should be, it's very similar from what I've been told in terms of how to use it, and the programming language that's involved. But I think it offers a lot of other customizable features. And particularly he mentioned there's a kind of beta version of online eye tracking using webcams. And I was particularly interested in that, because that hasn't been developed as much as other methods. So that could be interesting. And then we'll have some discussion by Brielle Stark, and Joshua, I forgot his last name. But basically, they'll be talking about some online databases for neuroimaging data, as well as data from atypical populations, particularly aphasia, traumatic brain injury and dementia. So for those of us that do work in those areas, there's actually these great publicly available, sort of semi-publicly available datasets, if you do work in aphasia. So, let's see. Yeah, and then following that, we're just gonna have a short panel discussion, and the idea with that panel discussion, let's see if I can go to my presentation here. Yeah, the idea of the panel was simply to point out that a lot of things that might be very useful for us right now in the short term is getting data from another person that you may know or not know. Maybe there's data that's published, but they haven't publicly released it; still, a lot of researchers are perfectly happy to share that data with you. So we just wanted to discuss a little bit about how to navigate that. And so Bri Stark has also done that a few times already.
And so she's going to be discussing some of that, and I've got some other people, too, that can jump in here and discuss their experiences with just getting data from people. That could be helpful. Okay. With that said, I'd like to turn it over to Shohini. And so what I'm going to do is I am going to share the screen, and I am going to... Okay, great. We've already got that going. I think I need to unmute you
Are you unmuted, Shohini?
No you're not. Okay. I will unmute you. Okay. You should be unmuted.
Yep. Hi, can everyone see my screen? Yeah, if you can't see this, I can see your screen fine. So I assume that that will work for everyone. Okay, so, um, yeah. So today I'm going to be talking about the Alice data sets.
So, these are, we just want to point out, these are two kind of newly released data sets that, you know, you can use if you're right now working from home or unable to collect any kind of new data for your experiments. So these are two parallel naturalistic EEG and fMRI data sets, and it's based on chapter one of Alice's Adventures in Wonderland. So as you can see in the figure, all the participants heard the same chapter, so we have the time point of every word in the story. And depending on the modality, fMRI or EEG, we measure the BOLD signal or the electrophysiological signal. And overall, in this chapter, there were 2,129 words and 84 sentences, which are on average 20, you know, 25-ish words long. And overall, the stimulus is about 12 and a half minutes. So, in terms of how you can access them, they're both publicly available now. One's shared on the University of Michigan website, so the link is posted here. And the other one is shared on the OpenNeuro repository, and both are shared under a Creative Commons license. So you know, you can use it as long as you attribute us. And if you want more details about the paper, so I'm going to go over a bit of it, but if you want a few more details, you can check out our forthcoming paper, which is like a data paper going really into details about data collection and the scanning protocols and everything, if you're interested. So in terms of data collection, for EEG we had 49 participants, and there were different analyses done, and the paper kind of describes if in certain cases certain participants were excluded, in case there was too much movement or they had a really low score on a quiz. But they were recorded at 500 Hz from 61 active electrodes. And this data collection was done with John Brennan and his students at the University of Michigan. And for the fMRI, we had 26 participants, and it was a 3 Tesla scanner with a 32-channel head coil at Cornell University. And this was collected with John Hale and some other students. We collected this data set while I was at Cornell, and the study design was the same, so the participants came in and, you know, there was no explicit task, they just had to listen to the entirety of that audiobook. And once they heard that audio section, they all completed a multiple choice questionnaire at the end, and they all did fairly well; some of the EEG participants didn't do as well. And that's also noted in the paper, so you can kind of take a look if you think that, you know, some people weren't paying attention. So that's also kind of given in the paper. And so along with the actual data set that we released, we also released the timestamp of every word in the story and also some of the predictors. So we already have some published analyses based on these data sets. So last year, John Brennan and John Hale had an EEG paper in PLOS ONE about hierarchical structure guiding rapid linguistic prediction during naturalistic listening. So for this one, they basically found that it's hierarchical structure rather than sequential information, and all the predictors they used for this have been released with the data set, in case there's interest in replicating it or, you know, trying to kind of extend that analysis further. Then another example of an existing analysis would be a Brain and Language paper from 2016.
That, you know, we had John Brennan and John Hale and some other people on, and there they were trying to look at how linguistically rich grammars, you know, were found to be correlated with activity in temporal but not frontal regions. And all the predictors that we used in that analysis have also been released, so it's also available. And then one of our other colleagues recently did a study using the fMRI data set to kind of test predictors based on the distributional hypothesis, confirming a meaning-related role in the anterior temporal lobe. So, just to give you a sense of kind of the space of research questions that has been asked, but there are definitely lots of kind of further questions you can ask and, you know, new hypotheses that you could test, because this data set is not task-based. So anything that you're interested in, you know, you can go and test your new predictor based on that. Or, a lot of the prior analyses have been done in SPM in MATLAB; if you're interested in trying it out in a Python environment, I know for EEG there's, I mean, there's like MNE, and for fMRI, there's nilearn. So if you want to try to replicate the analysis in a different environment, you know, that's also another possibility. So yeah.
If anyone has any questions
about the data set, or like,
if there's anything that's come up in the chat, I'd be happy to answer those.
Okay, so thank you, Shohini. That's great. Um, if people have questions, like I said, please try to use the Google Doc.
So again, I can send out that link here because I think some people have joined now so I'm going to just copy that link and put it into the chat.
And so, a question: this is both fMRI and EEG, right, Shohini? Yeah, it's both, and so there were different participants, but they heard the exact same stimuli at the same rate. So,
So for people that are getting started, that just wanted to do like a first pass at this: so this is all naturalistic comprehension, right? So these are like, it's a naturalistic kind of story. And this is very different, I think, from what many of us are used to doing, which is a controlled experiment. So can you just give some thoughts on, like, how would you start approaching, you know, doing that? Like, is it necessary to use the same kind of computational, you know, parsing approach that you used?
No, I mean, you could; at least the simplest thing would be, so the timestamp of every word is given. So if you want to just do something like nouns versus verbs, right, you could just annotate them; you could have a binary, like 0/1, sort of predictor. And just the same, for example, in an fMRI thing: if you just did a typical GLM thing and you wanted to kind of compare the effect of nouns versus verbs, you could just annotate all the nouns with a one and the verbs with a one in a separate predictor, to do like a contrast like that, which would be comparable to kind of more of the classic control-based designs. We did more like gradient predictors based on different kinds of NLP stuff. But you can kind of start with like a very simple predictor and see, you know, how that works out. And then you can maybe extend that to, like, further parsing metrics or anything else that you're interested in.
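(To make that concrete, here is a minimal sketch, in Python with numpy, of how one might turn word timestamps into binary noun/verb regressors for a GLM; the word list, part-of-speech labels, TR, and scan count below are invented for illustration and are not the released annotations.)

import numpy as np

# Assumed annotation format: (onset_seconds, word, part_of_speech).
words = [(0.5, "Alice", "NOUN"), (0.9, "was", "VERB"), (1.4, "beginning", "VERB")]

tr = 2.0       # hypothetical repetition time in seconds
n_scans = 375  # roughly 12.5 minutes of data at this TR

noun_reg = np.zeros(n_scans)
verb_reg = np.zeros(n_scans)
for onset, _, pos in words:
    scan = int(onset // tr)  # which volume this word falls into
    if pos == "NOUN":
        noun_reg[scan] += 1
    elif pos == "VERB":
        verb_reg[scan] += 1

# These regressors would then be convolved with an HRF and entered into a
# GLM (e.g. in SPM or nilearn) to contrast noun- vs. verb-related activity.
print(noun_reg[:3], verb_reg[:3])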
Okay, I'm gonna take a look at the google doc and so one person is asking.
Let's see are there metadata on Participants linguistic background, are they monolingual? No a second language, etc.
So I think, I'm pretty sure for the EEG one too, but for the fMRI one, we made a note when participants were bilingual or, like, multilingual. But we didn't make a note of the exact languages they spoke. As long as they were self-reported native English speakers, they were included in the study. So we ended up, we had some like Korean-English, Hindi-English bilingual speakers. So we just kind of made a note that they, you know, they spoke another language.
Okay, so they're all native English speakers, possibly are bilingual. And that there is it noted that they're bilingual in the data set
In the metadata, yeah. If you go to the repository, there is a readme file with the metadata describing all the participants. We also did a little audio check to make sure they could understand the story properly, and the decibel level at which the story was played was kept similar across all participants; that decibel level is also noted for all of them. We didn't use that value in our study, but their quiz comprehension results — how well they did on the quiz — and the volume at which the story was played are all noted in the metadata.
Okay, that's fantastic. Okay, so that behavioral data could all be used. Yeah.
So that's great. One person asked a question: is the audio that participants listened to also available, the stimulus? Yes, we included it with the data set, and we also share the link from — what is it called — LibriVox, which has tons of audiobooks as free downloads. If you go to that website, you can download it directly, but we also include the link in our repository.
An audiobook for free.
It's great. If anyone has more questions, if you're just tuning in, please go to the Google Doc that I just sent a link to.
You can add your questions there. So maybe you can just talk for a minute about what you've done already, like some of the results that you guys have gotten with this data set.
Sure. So I gave a couple of examples here. Um, I'm trying to think of some other stuff that people have done. Um — so you presented at CUNY, right? It was your poster, and so
that — so the second paper you've listed there, the fMRI paper — you used that same data set, and then what you did is you added regressors based on properties of the verbs, based on selectional restrictions, right?
Well, William, that's actually a different data set. I collected that one as part of my dissertation, so it's based on The Little Prince, English version, and it's not publicly available yet. That's one that John was involved in before this Alice in Wonderland one. That one we're hoping to release in the future, because that's actually cross-linguistic work: we collected the English data set, and one of our colleagues, who's listed on the screen, collected the Mandarin version of The Little Prince, so she has fMRI data for that. And currently in France there's a group collecting the French data for The Little Prince. So the goal is to do a cross-linguistic fMRI study. We haven't finished scanning, so we haven't released that data set yet, but down the road that's the goal — to release it as a kind of cross-linguistic parallel fMRI data set. That's not out there yet. But yeah, my recent work has been based on that one, which is actually longer — one and a half hours — a similar naturalistic stimulus, and all of my recent work on argument structure, looking at non-compositional expressions, has used that data set. And right now I'm looking at things like topical surprisal and how to incorporate contextual information. But yeah, that's the thing — I've been using that same data set now for, I think, over two years, asking all sorts of questions. So this kind of naturalistic stimulus definitely lends itself to that kind of flexibility.
That's great. Okay, we've got a couple more questions here. So one person is asking: is the code used for the NLP models also available, if they want to implement the analysis in a different pipeline?
So the predictors are available; the code for how we derived the predictors isn't. But if you reach out to me or John, we can probably help you out with that. For example, for the Brain and Language paper, they calculated bigram and trigram surprisal, and all of those values for each word are already available. So if you just want to replicate it with a different pipeline, you can just run it — we have it all in a spreadsheet, so that's available. If you want to know specifically how we did it, it's described in the paper, but you can also reach out to me or John; we'd be happy to help out and answer any questions.
Okay, that's great. And then there's another question here, which is: as far as I remember, the audio was slowed down, so it may not match what was in the audiobook from LibriVox. Is that correct?
Yes, it was slowed down by 20%. We had the slowed-down version included; we just slowed it down in Praat. We didn't do anything super fancy with slowing it down — it was just done in Praat — so you could download it and do it yourself, and the slowed-down version is also uploaded.
Okay, that's fantastic.
So I just want to let everyone know: these are great questions. If you wouldn't mind adding your name, that's helpful as well, if you feel comfortable with that. This is sort of a rushed session here, but if there were more time, we could actually click on you and unmute you, so you could ask your question yourself along with any follow-ups or clarifications.
If anyone else has questions, please post them right now on the Google Doc. If not, then we're going to transition to the FindingFive tutorial.
Yeah, and people can definitely reach out to me. You can also read the data paper that I mentioned — it's in the schedule that William sent out the link to, and it's also on my website. The data paper has links to all the data sets, and it describes all the data sets too. And you can also email me; I'll leave my contact information in the Google Doc if people have questions after this. Okay, that's fantastic — thank you so much for doing this at the last minute; that's really helpful. And I just want to let everyone know that in the afternoon we're going to have a presentation by Joshua, and he is going to talk generally about different databases that are online — publicly available neuroimaging databases — and his experiences accessing and using those. I'm sure one of the biggest issues with these publicly available datasets is trying to understand what people did, exactly how they did it, and how to find things that are actually of interest to you and usable for the questions you want to answer. So that will be covered in the afternoon session. Okay, so now I would like to transition
to Noah Nelson. Again, he just, I believe, got his PhD not long ago from the University of Arizona, and he is one of the representatives of the FindingFive platform. Okay, so I'm going to go ahead and turn it over to
Noah. Let's see if I can find you on the list. Okay. We're gonna try to unmute you. Okay, you are now unmuted.
Hi, can you all hear me? Yes, I can hear you great; your audio quality is good. Um, I don't see your video, though. Yeah — is that something — let me see here. I think I can — there we go. Yes, it's working now. While I'm in control of that, I can guarantee you guys that Noah's beard is far bigger than the picture on the FindingFive website, so you should congratulate him on the accomplishment.
This is just during coronavirus — this is just the quarantine beard, okay. But anyway, thank you.
So please let me know. If you need anything please let me know and we can you know, get resources out to people if they need to, you know, access anything online.
Okay, great. Yeah, thank you. Um, before I dig in, I just want to say thank you to William and everyone else who set this up — this is a really great thing that must have taken a decent amount of effort to put together, so thanks for that. I am going to try to share my screen here; let's see if I can
make that work. There should be, at the bottom in the middle, a share button. Yeah — if you click it, you can share either a whole computer screen or just a program, so I was able to just share PowerPoint. Yeah, sorry, one second.
Okay. So I might have to leave and come back — it looks like my computer won't let me share until I restart Zoom. Okay, that's all right.
Will I be able to get back in? I don't
Yes — I was able to upgrade the Zoom plan, as Zoom apparently makes that very easy, and then it automatically allowed people to join me. Yeah.
All right. In that case I will be right back.
Okay. Yeah, sounds good. Yeah.
So now we'll be entering into the very brief stand up portion of the meeting, where we try to entertain you for a minute until Noah gets back.
If anyone has any really good linguistics jokes, please feel free to share them on the resources page.
I'm trying to think of any off the top of my head — there are really none coming to mind. I'm trying to think of a good one... Oh yeah, we had some jokes during CUNY about dog semantics. So the question was, you know, what do dogs — what are their logical semantic representations, if they have any, right? So this is an interesting — cue the Jeopardy music. No, no Jeopardy. I mean, I could try to — okay, Noah is back. All right, you guys are spared the continuation of the rambling, and so I will now go back in. Let's see if I can unmute Noah.
Yes, I can
unmute it. Okay. Take it away, Noah.
Okay. So I'm
hoping you guys are seeing my slides.
And we can see it perfectly fine — it's a full-screen presentation, and your video is at the top here. Great, great. Okay. So — all right, thank you for the introduction. I'm Noah Nelson, and like
William said, I just graduated.
Just a little bit about how FindingFive works, essentially: if you have some idea for a study, when you build it on the FindingFive platform, you write some code in a very simplified coding language that has a pretty low learning curve. The code itself is modeled on the actual design of behavioral experiments, so we use terms like stimuli, responses, trial templates, blocks, procedure, etc., just to try to make this all more accessible and familiar to us all. And once you've coded it, you can launch a session of it to begin recruiting participants. Essentially, you code the experiment once, and then you just select whether you want to launch the study on the FindingFive platform or through Mechanical Turk — and as long as you have your Amazon Web Services account all set up and ready to go, we essentially convert the study to the format it needs to be in to run on Mechanical Turk for you. So just real briefly, before I get into actually showing you FindingFive itself, I want to talk a little bit about the experiment structure in FindingFive and highlight the parallels with a normal behavioral experiment. I don't have to explain to anyone here what a typical behavioral experiment might contain, but I want to highlight that the components of a FindingFive study are meant to mirror it explicitly. One of the few differences you'll notice is that we have a component of a FindingFive study that we call trial templates, and this is basically so that you don't have to hard-code every single trial individually: you make a template and say, I want these stimuli distributed with these responses to make trials in this block. That just tries to make things a little simpler and easier for our users. To cover briefly the sorts of things included here: stimuli are just anything you can present to participants, so what we refer to as stimuli can include things like text, audio files, images, or videos — pretty much, you name it. Responses are essentially any opportunities for data collection that we make possible, and these include things like free response in text or audio — as long as participants give their browser permission to access the microphone, you can record audio from them — plus choice responses answered with clicks or key presses, rating scales, etc. Our trial templates are, like I mentioned, just templates for how to pair responses and stimuli together. And then we have a procedure section, which is what you might imagine: we essentially say which trial templates you want to put into which blocks, and what order you want those blocks in for your participants. So essentially, you're defining these different components and configuring them together, and that's really all there is to it. This is obviously a comprehensive logic, and some studies don't necessarily need every component at maximum complexity, but we compel you to use it anyway, so that all studies have the same underlying fundamental logic. So what I want to do right now is demonstrate — we have some demos that I want to show with a couple of features.
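To make those pieces concrete before the demos, here is a minimal sketch of a study specification with the four components just described. This is illustrative only: it is written as a Python dict mirroring the JSON-style grammar, and the exact field names and values are assumptions, not verbatim FindingFive grammar — the study specification grammar reference at help.findingfive.com has the real syntax.

```python
# Hypothetical study specification; field names are assumptions based on
# the components described above, not verbatim FindingFive grammar.
study_spec = {
    "stimuli": {
        "prompt": {"type": "text", "content": "Is this word a noun or a verb?"},
        "word": {"type": "text", "content": "rabbit"},
    },
    "responses": {
        "pos_choice": {
            "type": "choice",
            "choices": ["noun", "verb"],   # answered by click or key press
        },
    },
    "trial_templates": {
        "categorize": {
            "stimuli": ["prompt", "word"],
            "responses": ["pos_choice"],
        },
    },
    "procedure": {
        "blocks": [
            {"name": "main_block", "trial_templates": ["categorize"]},
        ],
    },
}
```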
Let me see if I tap that share and change to — sorry, my Zoom bar is in the way; how do I change tabs? There we go. Okay,
so this is our website and our landing page. And as you can see, one of our big focuses is to try to make the entire experiment process something you can handle directly on our platform. There are a lot of alternatives out there — DIY alternatives, for example, where you maybe code your study in one place, recruit participants somewhere else, and run it over there — and we try to make it all more of an all-in-one experience. I encourage you guys to visit the website, take a look, and poke around. I think we made a special case here: we are typically by invite only, so that we can control who's coming in and starting to use our product. Not that we want to keep anyone out, but we do things like manually verify that you are actually a legitimate researcher from a legitimate institution, for example. Right now, I think we've waived the invitation-code portion, so if you guys try to sign up, you should be able to make an account and get into FindingFive. You won't be able to launch a study on the platform or through Mechanical Turk until we verify your researcher status, which we have to do manually — so that will happen later.
But just kind of a heads up there. Well,
I guess maybe Zoom is slowing down the internet.
Or maybe it's because we're all rushing in to sign up. Maybe that's why
I know there are sometimes weird interactions between Zoom and web browsers, stuff like that. So hopefully it's just the internet. Yeah.
Also, my personal Internet's probably not the greatest here at my house. So.
Okay, so right now I'm signed into a sort of administrative account that we have, research at FindingFive, but you can see that all researchers on FindingFive are also potential participants. So what you're seeing right now is the landing page a participant might see if they come to FindingFive. There are some studies being offered for credit at the University of Arizona that are kind of dominating the main feed here, but as you can see, there are some other ones from other researchers, and it looks like not that many are active right now. In our researcher page — just to show you guys the demos — this is where you would see any studies that you've made. Through our interface, you can see we have a button to create a new study. When you actually create a study — I'm going to open this tokenized text tutorial for researchers — this is what the materials look like that you actually code up. You can see we have the organization of trial templates and procedure, and there's a tab to go back and forth between the different interfaces to work on that. Over on the right side of the screen, we have a list of our stimuli, which is searchable. You can add new ones manually, where you give them a name and define some attributes of that stimulus, like what kind of stimulus it is and what its content is. You can also define a whole bunch of stimuli in a CSV file and just upload them directly, which makes it really convenient to do lots of stimuli all at once. And of course, you can download and export your stimulus definitions, for purposes of, say, sharing your materials with another researcher. We also have responses as another panel, with the same sort of interface.
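As a hedged illustration of that bulk-upload workflow, the sketch below generates a stimulus-definition CSV programmatically. The column headers (name, type, content) are assumptions based on the attributes just described, so check the help documentation for the exact format FindingFive expects.

```python
import csv

# Hypothetical bulk stimulus definitions; the column names are assumptions,
# not the verified upload format.
rows = [
    {"name": f"word_{i}", "type": "text", "content": w}
    for i, w in enumerate(["alice", "rabbit", "tea", "queen"])
]

with open("stimuli.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "type", "content"])
    writer.writeheader()
    writer.writerows(rows)   # upload the resulting file in the stimuli panel
```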
So what I want to do is actually show you the preview. When you're working on a study in FindingFive, at any point — provided that your code compiles correctly — you can preview that study. And so we'll get a look at what this particular demo tutorial is about, and that's the tokenized text stimulus, which is a special kind of stimulus that we created to facilitate self-paced reading, for example, and other studies of that kind of nature.
So, in this particular case, since this has a self-paced reading component, there are keyboard interactions, and a lot of people don't know this, but Apple's Safari web browser will not let you do that in full-screen mode. So we have this nice little warning about it — that's all this is saying. Just a heads up that if you happen to know your participants like to use Safari for some reason, this is something to be aware of. Another thing to point out: we always have a landing page here with some instructions — general FindingFive instructions for participants when they take a study. And, for example, we require that you have a consent form in order to be able to run a study; it's up to the participant to actually read it, of course, and they have to give their consent to begin the study. This, of course, is a preview, and so it's not really a real study, but it gives you a sense of what would be happening.
Just to demonstrate this particular stimulus type, since this is one of many kinds of stimuli that we have: this little tutorial will help us explore a variety of ways to create engaging studies with the tokenized text stimulus. As you can see here, a tokenized text stimulus is just a string of text that is broken up into tokens. It's designed to present text content in an incremental manner, with or without interaction from participants. So in this case, it didn't require any interaction from me — this was automatically paced. And I think that was pretty clear. So when we have this sort of automatic presentation, we obviously allow you to adjust the speed of the stimulus; we have a case with a slower speed and a case with a faster speed to show you what that looks like. But we also have different presentation modes: the plain mode just presents the text on the screen as you've just seen.
The Mask mode actually
can do like masking of the actual tokens as they appear. In this case, we have a backwards mask. So they're all hidden after they're displayed.
It looks like we have other chat questions.
Yeah, I'm trying to monitor the chat, but I might not do the best job. Okay, yeah. And so, in addition to masking, we also have singleton displays, where it's one word at a time and everything else is hidden. And then, of course, we don't have to display these automatically — in this case, this is self-paced, so I'm going to pace it myself: "FindingFive takes care of the nitty gritty details, so that you can focus on research."
So reaction times for that whole process are recorded for you automatically. You can implement self-paced reading of tokenized text stimuli under any of the presentation modes — plain, masked, or singleton — depending on your study needs. And these are just some of the core functions of the tokenized text stimulus; it can do more, so you can go to help.findingfive.com to look at our API documentation, and specifically our study specification grammar reference there, where we talk about how to define different types of stimuli and what they can do. Okay, I'm going to skip this part, since I don't really need to give myself feedback.
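For a sense of what such a definition might look like, here is a sketch of a tokenized-text stimulus for self-paced reading, written as a Python dict mirroring the JSON grammar. The field names (mode, self_paced, delimiter) are assumptions inferred from the demo — treat this as illustrative and consult the grammar reference for the exact syntax.

```python
# Hypothetical tokenized-text stimulus; field names are assumptions
# based on the demo, not verbatim grammar.
tokenized_stimulus = {
    "type": "tokenized_text",
    "content": ("FindingFive takes care of the nitty gritty details "
                "so that you can focus on research."),
    "delimiter": " ",        # split the content into tokens at spaces
    "mode": "singleton",     # one token at a time; "plain" and "masked" too
    "self_paced": True,      # participant advances each token by key press,
                             # and per-token reaction times are recorded
}
```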
So, just as a brief little overview to give you the basics: what we saw was essentially two blocks that were defined in here to consist of various trial templates. And we have things like cover trials for instructions prior to trial templates, the trial templates themselves, and trials to come at the end of a block. In this case it's pretty simple, because this is just a demo in a tutorial, but there are a lot of different things you can do in the procedure with, say, ordering the trial templates, and you can order stimuli and responses within the trial templates as well. What I would like to do now is hand over the presentation to my colleague, Patricia Reeder.
And, William, if you can unmute her and Patti, are you
ready to do a demo for the audio? Yes, I am. Before we do that, do we want to answer some of the questions in the chat? Yeah, if we have questions to answer I would love to. Great. Hi, everybody. Yes. So there are a number of questions that we saw. I think one of the first ones that occurred to me
was about timing precision. So this one question says: what level of precision do you have for reaction-time responses, compared to, for example, PsychoJS, etc.? Since for many studies, a lag of several milliseconds could be too much.
Yeah, so our sensitivity is on the order of milliseconds.
Great. Okay, another question: is FindingFive open to entering into third-party data processing agreements with universities? One person said they ask this because their institution, the Arctic University of Norway, requires this for GDPR reasons.
I think we would be open to it, we would have to discuss specifics. Obviously, that's not something that we've done yet. But I don't see why we wouldn't be open to such a thing.
Okay, um, another question: is pseudo-randomization of stimuli available, for instance a Latin square structure? And then they mentioned that this thing seems amazing.
Aw, thanks. So our view is that we don't really feel like we need to just automatically do a Latin square for you; I think most researchers are pretty capable of designing such a thing.
And pseudo-randomization can be a little tricky to automate. There are some features that we make available, though, just to give a sense — I mean, true randomization is easy; we do that quite well. But we do have, let's see, I think it's in the trial templates — I'm showing the help documentation right now, by the way — stimulus patterns. So we do have a pseudo-random stimulus pattern that does random reordering of stimulus presentation, but subject to certain constraints. You can define attributes to apply constraints and say, you know, I want at least this many of a certain attribute in a row, but at most this many, and we'll do randomization within those constraints. Does that answer the question, I hope?
I think so — if you have follow-up questions, feel free to follow up on the Google Doc.
One person asked if it's free. And Shiloh has been doing a great job, as well as Patti, of answering on the Google Doc, so thank you guys so much for doing that online. But it looks like the question was answered: it is free. Is that correct? Yeah — wow, I should have mentioned that. Yeah, FindingFive is free. And that's one of the — I mean, there are definitely other platforms out there that do
much of what we do, but you typically have to pay for them.
Ours is free. Since this is entirely a volunteer-run operation, just to help us with server costs and things like that, we do have premium plans available with some perks, but all the core functionality and everything you need to get started is totally free.
And just to jump in: on the Google Doc I listed the webpage where you can learn about the differences between membership tiers. Basic gives you everything you could possibly want to do on FindingFive, basically; premium membership will just bump up that ability a bit more.
It's great. Okay, so
Let's see: how much customizability is there in the various presentation modes? For example, underlining words, different fonts, bolding, etc., or non-Latin scripts?
Really, let me show you what a tokenized text stimulus actually looks like. So in here we have this — well, not that long, but somewhat long — definition. You can see we have a definition of the character that we're using to mask; in this case it's being forward masked and backward masked. We have it set to being self-paced. The content is the actual string of text itself, and you can put any UTF-8-viable stuff in here. By default, it splits based on spaces, but you can specify other characters if you want — you can actually use a regular expression to break up the text as well; that's another feature we allow for. And then basically, we have parameters for things like the size of the font.
I believe — actually, now that I think about it, I'm really glad somebody asked this — I've never tried messing with the font itself. But we're essentially accessing the CSS based on properties that you give. So size, yes, by default; I don't know about font, but that would be something pretty easy for us to implement. At the very least, this should work, because we take HTML tags: you should be able to do things like a span with style equals, and define a CSS style there, for anyone who knows anything about HTML. You could put style specifications in there, but you might have to learn a little bit of HTML to make that work. Otherwise, if you needed a specific font and we didn't have a way for you to do that, this is the kind of thing where you'd reach out to us, and we'd be happy to implement something like that — something that's definitely generalizable and usable for a lot of people. Right — so I think that's one thing you wanted to underscore: the responsiveness of your team. If there's something that's not built in that people want, they can reach out to you, and you guys are flexible and responsive in trying to adapt. Yeah, absolutely. Yeah, I mean,
FindingFive is as good as it is right now because people have used it and made suggestions, and so we are very on board with that approach to continuing to improve it. That's fantastic. If you do, I'm sure you're going to be inundated in the next few weeks and months, you know, with all the people in the world doing eye tracking — sorry, self-paced reading — all of a sudden being on FindingFive. But that's a different problem. Okay. And so, one person had asked before about the turnaround — I think Shiloh had responded to that — like, when people are asking questions or asking for changes, how quickly do you get back? What's the kind of timeframe?
Yeah, I should probably be trying to glance at this document too — I'm not sure what Shiloh said, but we try to be really fast. Taking email, for example: when someone sends an email to research help at findingfive.com, it goes directly to our phones, and one of us will usually answer within 24 hours with at least a preliminary response. Sometimes our response might be: oh, this sounds like you're going to need a new feature; we're having a meeting tomorrow, let us talk about it, and we'll get back to you. But
we usually at least let you know that we're paying attention and that we care about your issue as fast as we can. Right, right. Much better than Verizon on your call.
I hope so.
So someone asked a specific question about the variability in RTs — like, what is the variability in the reaction times that you collect?
Which, you know, I don't know — Ting would be more familiar with those kinds of details. I know that in my field as a linguist, with the people I've worked with, I had kind of learned that you always want to use a button box rather than a keyboard, because keyboards aren't as responsive, and things like that. But having talked to Ting about this, apparently a lot of those studies were done with much older keyboards, and keyboards have improved a lot, so he's pretty sure the difference is very small. I couldn't give you a number, though — he's way more knowledgeable about that kind of stuff than I am. So if that's a question that's important to you, I encourage you to send an email to us, either at the researcher help email or at feedback at findingfive.com, and I'm sure Ting would be more than happy to answer you. That's great. We've got a ton of questions, so there's definitely more for me to ask. I don't know if you want to come back to questions or just keep going. Let's, um, let's come back to it. So we have two other demos that we want to show, and I think there's a good chance that some of these questions will be implicitly answered through the process of going through those demos. So
why don't we hand it over to Patti? That's great. Yeah. So I just want to let everyone know that's listening: keep adding your questions to the Google Doc. We're going to try to answer every question during the live chat here, but even if we don't get to it, we're going to make sure we get some responses into the Google Doc from the FindingFive team. So your questions will be answered regardless of whether we get to them during the live session here. Okay, anyway, take it away, guys. Right. So the purpose of us going through these demos is to show you some of the stimulus and response types that we think might be most useful for this audience. So Noah just showed you tokenized text.
I'm going to try to share my screen as well. Somebody in the Google Doc just asked about recording participants' voices or other kinds of audio, and we can do that. So let me just share my screen here.
Okay, so what I'm showing you right now — oops, sorry. Earlier, Noah was showing you our API, our study specification grammar reference, in our help documents. We do have a ton of different types of stimuli and response types, and one of the response types is audio. The demo I'm going to be showing you is incredibly simple: we'll just be recording a participant's audio response to a query on the screen. You can see here — this is the researcher panel that Noah was showing you earlier. We again have a very simple procedure with just one block of trials there; I think there are only going to be two trials, in a fixed order. And in our trial template, we are setting up some instructions for the participant, and then just two basic ways of recording audio responses. So let's dive into
previewing the study. So let me actually go back over here.
So now you should be seeing what this will look like for participants. Again, just like Noah said, there'll be a consent form here; it's up to the participant to read the consent form, and we'll go ahead and participate.
Okay, I agree to the terms I'm going to begin
Okay, thank you for checking out this demo study on the audio recording feature.
Recording participants on FindingFive can be achieved by using the audio response type. Once FindingFive detects the presence of an audio response in a study, it will attempt to request microphone permission from participants before the study starts. If FindingFive detects no audio recording equipment on a participant's computer, it will prevent the participant from starting the study at all. So here we go: the audio response creates a simple recording interface that participants can control on their own. It allows participants to record themselves, review the recording, and confirm if they're satisfied with it. So I'm going to go ahead and record my speech right now — you can see the volume bar is moving in accordance with the volume of my voice — and I'll say I'm done with this trial as the participant. Now I can listen to what I just recorded. I'm not exactly sure if you're going to be able to hear this through Zoom, but let's give it a shot.
Yeah, I don't think we can hear that. You can't? Okay. Sorry, I was listening to my voice through my speaker. Doesn't it sound wonderful? It sounds strange — as listening to one's own voice can be.
Oh, it is good quality? The quality is fantastic. Yes, yeah. All right. Another feature of the audio recording, which can be particularly helpful, is changing the padding on the recordings. So you might have a participant
click the record button and then hit the stop button while they're continuing to speak. So if you anticipate that this could potentially be an issue, you might want to build some padding into this response type. For example, in this particular trial, we're going to be recording my voice here for a couple of seconds; as the participant, I'm going to hit stop, but in fact the recording continues for an additional 200 milliseconds, 500 milliseconds, whatever you want it to be. So the audio file that you get when you collect your data is actually going to be slightly longer than what the participant marked by hitting the stop button. Audio responses are recorded in stereo, compressed at a bit rate of 64K; the fidelity is good enough for human speech. Each file you receive is an OGG file, similar to MP3. You should be able to convert this if you need to, using any number of free tools, or open it up in a tool like Audacity.
And again, our API has a lot of details on this. All right.
let me go back over here.
All right. Noah, would you like to come back at this point in time?
We are. Oh, there we go. Okay.
Do you have anything you'd like to add? I guess I can, really quick, just highlight that when you look at the API for audio responses, you can see one of the things you can change here is the padding on it. Again, the default is 500 milliseconds: after the participant has hit stop, it will continue to record.
Yeah, I mean, one thing I would add: if I'm remembering correctly, we put an implicit upper limit on the padding, because if you had a typo where you put in a really large padding, you would essentially be recording somebody without their consent — they would think they were no longer being recorded when they were. So we did put an upper limit on that; I can't remember what it is, but it's implicit. And 500 is usually, in our experience, a good number. We realized a lot of researchers were having issues with the data getting cut off, and once we put that in — we kind of played with the number and suggested that value — those problems basically went away. So
and just to show how simple this is, when you build an audio response, all you need to do is create a new response type here and the researcher screen, set it to type audio. incredibly simple. If you want to add padding, that's an optional setting that you can have in your response type. Yeah.
Okay, do we want to answer more questions or go on to the third demo? Maybe we should do some questions about audio specifically. Okay, um, so
there are some questions about whether you can record voices, and we just established that yes, that works great. Let's see — I'm trying to find it: could you describe how voice responses work a bit more? Can you advance trials based on a voice response, like have their voice trigger a trial event? No, we cannot do that as of yet. Yeah.
I'm trying to think whether there would be any
technical issues with that. I mean, as long as it's done by them actually pressing the record button, so that we're actually recording them — I suppose it might be doable in theory, but there might be some technical barriers there. One thing I do want to point out, just as a general bit of information for people who aren't used to collecting data over the web: the circumstances your participants are going to be in are not the ones you're used to in the lab. So that might be the kind of study design that is a little more difficult to do over the web — despite whatever instructions you might give, people will do these things in coffee shops sometimes.
Right. So on that point, a follow-up to that question: someone was asking about two-to-three-year-old participants, and I'm not sure exactly of the question, but they were asking again about the possibility of advancing trials based on voice. But also: is it possible to have general audio recordings? I'm assuming the question is, can you record throughout the experiment rather than having a button press initiate each recording?
So no, that's not something we can do right now. We can't just do a blanket recording throughout the whole thing.
I'm trying to think — that might be the kind of feature we could certainly talk about, but it might also be the kind of feature that we would be hesitant to make possible in the first place. There are just some issues with how clear it would be to participants, and with the data itself: transmitting it over the servers could mean a very big audio file, and depending on how many participants you have in your study, that could cause some real server load issues that might affect your study. We certainly would be open to discussing feasibility and options for a particular study design or a particular goal of a researcher, but yeah, my gut reaction is that there might be a little bit of pushback on our end; we would want to at least talk about other options. Well, that
I think raises an important question that maybe we should discuss and allocate some time to. I think there are a lot of people, including myself, that are not generally familiar with online experiments, and there are things you're going to run into that are maybe even obvious, but you have to sort of do it to understand — like the point you mentioned about the controlled setting, and other kinds of issues that might pop up. So maybe, if you guys think about it, we could take ten or fifteen minutes just at the end to discuss that specifically. I think that's a great way to do it. Yeah. And on that point as well, I think a lot of people probably don't know about Mechanical Turk payment and that sort of thing, so maybe that would be good to include too. We'll roll those discussions together at the end, I think.
okay, yeah, that's great. Okay. Let me see they someone said, I've got three plus ones on a question. You mentioned a beta online tracking system. Is it possible to integrate other web based data collection with the experiment and finding fine, boy, so I think that may be referring to the fact that PCI, Beck's has a beta tracking system. And I knew I had asked you this question as well before and you said that you don't currently have online web based training, right?
Yeah, we don't currently have it and we don't have any plans to do it in the immediate future.
We were just a little uncertain about the general reliability and generalizability of such a paradigm through the webcam. But it sounds like, if PCIbex is doing it, we would of course be very interested in learning more about how that is going for them — because if they're getting reliable enough data for people to do visual world paradigm research, then that could be something useful for us as well.
Right — a visual world paradigm requires much less precision, I think, than reading experiments or other kinds of visual psychophysics.
Certainly. Okay, so another question: can participants see a visual stimulus and record their voice at the same time, like in a picture-naming experiment? And is it possible to measure response latency from stimulus onset to voice onset?
Yes — well, a little bit of that calculation would have to be your own. What we can measure is the amount of time until they press the record button. What might be more challenging is trying to have them open the recording and then see the image, and measure that time — but I think we can do that. We have a property we call barrier, which allows us to define whether certain stimuli block other stimuli or responses in the same trial from appearing yet. For example, it's very useful when you have a video and you want someone to watch the whole video before they respond: you make that video a barrier to the responses. If you take that property away, they could in theory press record at any time; you could put some delay on your image stimulus — you know what the delay is, and that measurement is very precise — and then all you have to do is measure the time within the audio recording before they actually started speaking. Of course, it's up to you whether you want to use voice onset or the click of the record button, because that click might better represent when they're ready to speak.
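To illustrate the barrier idea, here is a hedged sketch of a trial template in which a picture must finish displaying before the recording interface appears; both the field names and the placement of the barrier property are assumptions reconstructed from the description above, not verbatim grammar.

```python
# Hypothetical trial template using the "barrier" property; syntax and
# placement are assumptions, not verbatim FindingFive grammar.
naming_trial = {
    "stimuli": ["target_picture"],
    "responses": ["voice_recording"],
    "barrier": ["target_picture"],  # block the response until this stimulus
                                    # has finished being presented
}
```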
That's a great thing you mentioned — actually using the click sound. Is there a way to incorporate in the experiment, you know, a tone that would play, something that would be useful for
that? So that's — I mean, that's entirely up to you. If you make an audio stimulus that has a tone in it, you should be able to have that audio stimulus triggered when they press record, for example. But since each recording they make is isolated, usually the tone thing is done to help the researcher separate out: okay, this is the stretch of time that I'm interested in. With us, your users are usually going to be clicking record and stop, so that gives you a start and an end time to be interested in. But certainly, if you set up your own audio stimulus, you could do that sort of beep paradigm.
We don't have such a thing currently in place. You know, that's an interesting question, though. Depending on the format of it, it might not be that much work to translate it into FindingFive, but that would probably be something we would ask the researcher to do. So.
Right. Okay. Another question: is there a way to interact with stimuli on the screen, like clicking on images to play different sounds and such? Okay, yes. For those that are listening only: let's see — I'm not sure how much we should expand here. You can present audiovisual stimuli; we clarified that already. Can you do self-paced audio moving-window presentation?
Oh — no, no. But that's very interesting. That sounds like a cool feature. Get in touch with us. I want to talk to whoever asked the question about self-paced audio moving-window presentation: email these guys, and they will do their best, I assume, to try to implement that. It shouldn't be too much of a technical burden to figure that out, right?
Well, that's something I wouldn't want to comment on until I talk to the team. Right. All right. And if someone's a really good programmer, you know, you can join FindingFive and help them add features to their platform.
Yeah, we'll definitely plug ourselves that way — we want more help.
And I'm sure you're going to need the help once the requests start coming in. Okay, maybe we should move on — there are more questions, but I think they're less specific to this kind of stuff. If you have another demo or more material to cover here — okay.
so let me share my screen again.
What do we got here?
Oh, by the way — since I accidentally clicked on the wrong window — this is news.findingfive.com, and I'll briefly say: we have blog posts with news about our community, and we have a mission statement here, which tells you a bit about what we're all about; I encourage you guys to take a look at that at some point. As part of our blog, we also have some tutorials. For example, we have a tutorial on the automatic creation of compensation HITs for Mechanical Turk workers. That's something we can talk about later when we get into Mechanical Turk, but this is just an example of a tutorial we have. So, since I was on this page, I'll take this opportunity to share that.
If I can just jump in for a second while you're pulling it up: we have a crash course on news.findingfive.com which walks through building a simple memory-based study. Actually, if you go to researcher resources — yep, exactly — this crash course is pretty straightforward and walks you through, step by step, all of the programming concepts (there are very few) that you would need to build your very first simple study on FindingFive. So that would be a really good resource
to get started,
This is a little outdated — this is our old interface, before it got prettier. But the information is all the same; it just looks a smidgen different, and we'll be updating those screenshots very soon.
Is it possible for people to try to do that now and then get feedback from you? Sure.
Yeah, that should work. And by the way, if anyone's trying to sign up and encountering problems, let us know. If you're trying to sign up and hitting problems, let them know; and if you're able to get into the system, you can try messing around with this and trying the tutorial. Then, if you have questions in real time while you're working on it, we can get those questions to Noah and Patti. We can also potentially set up some video chats if you need help walking through getting started. Anyway, sorry to interrupt.
Now he's going to show us a slightly more complicated grammar.
Yeah, and, sorry, plugging my computer in because I realized I didn't do that. Okay.
So, what I'm going to demonstrate now is conditional branching, which — if that's not a term you're familiar with, I'll let the demo get it started for you. Conditional branching is essentially assigning participants to different arms or branches of a study based on some test condition. Usually this means based on a particular response they gave, or based on whether they passed an accuracy threshold in a training block, or something like that. So essentially it's defining different versions of your experiment that are conditional on participants' experience or performance. What this amounts to is changing the trials that a participant experiences based on their responses to previous trials. And we have two methods of conditional branching, in the broader sense, that we have
support for at the moment, we have a match method and an accuracy method. And these are the methods used to evaluate the condition upon which participants will be sorted. So using the match method, different response options are matched with different branches of the experiment. So we're going to try it with an alternative forced choice trial.
Basically — this is just a total toy example, obviously — depending on whether I choose the left button or the right button, I'm going to get a different version of this experiment. So if I press left, it actually tells me: well, you clicked on the left button, so you're seeing this. Obviously, if I had picked right, it would be a different trial. And this is the essence of conditional branching and how it works. This same match method can also be applied to, for example, a rating response. Here, participants are sorted into one branch if they select either one or two, a different branch if they select three, and a different branch if they select either four or five — so you can see how this might be useful under, say, survey conditions. I'll pick two. And if you noticed up here, it said "branched into branch A" at the top of my screen — I don't know if you missed it; it was only there briefly. I'm currently in preview mode, so I just want to highlight: this preview mode is for the benefit of the researcher testing their code. Participants would not see that; they're not going to be told that they were put into a particular branch. In this case, I defined something that I called branch A, contingent on that response. So here, since I picked a value of one or two, this is the message I'm getting. We also have the accuracy method, of course, and when we use the accuracy method, participants are sorted into their branches based on whether they pass some accuracy threshold. So what I'm going to do is run five AFC trials in a row with an accuracy threshold of 80% — I have to get four out of five. In this case, for the purposes of the demonstration, I'm telling you explicitly what the correct answer is. But as you can see, if I choose the wrong answer,
I'm not going to pass that threshold.
So I did not get four out of five; I failed to pass the accuracy threshold. If I had, I'd be getting a different branch of the experiment. Now, one way we can use this that's not actually in the demo at the moment — I created this demo on short notice, so I didn't have time to do this — but one thing that's really useful for training in an experiment is that you can set it up so that if somebody doesn't pass the accuracy threshold, they essentially just take the same block, or the same series of blocks, again. So they go through the training again, they do the test again, and you can evaluate them based on their overall performance across all their attempts, or just on their last attempt. You can set a number of iterations for how many times they can try the training and the test, and so on. So that's — oh, I probably shouldn't have done that. So that's conditional branching. I think maybe — well, actually, let's turn to questions first, in case people are interested in this, and then I'll show some results files after that.
Okay, so let me see if I can find any questions, particularly about the branching.
I don't see any — if somebody can find those questions on the Google Doc, I'm still looking. So, I mean, it may have just been too quick.
But I mean, I think this is a pretty powerful feature.
It took us a while to develop, you know, but we were pretty excited when we got it working.
Actually, what I should do is show you how it works in our procedures, so you can see that it's not terribly complicated. This particular design that I did here has a lot of blocks, because conditional branching in FindingFive happens exclusively at the block level. We did this because, for example, Qualtrics has a bunch of different kinds of conditional branching: one based on an individual response that happens at the trial level, where the test condition is a trial and the branching outcomes are single trials; then separately they have something else that's more like a group of trials; something else at the block level; and so on. We just thought, you know, it's not that much work to take all of this and do it only at the block level — and it's actually easier, I think, for researchers this way, because when I was trying to learn how to do this in Qualtrics, it got very confusing very fast.
So to give you a sense,
the first evaluation condition that we had was a match — using the match method to evaluate the condition on an AFC trial. I essentially just made a block with a trial template consisting of my AFC trial in it. What we do is define this branching dictionary here, and that's what tells FindingFive that this block of trials is to be treated as the condition for conditional branching. I set the method to match, and I set some triggers — which trials within this block we want to actually use to trigger the conditional branching. In this case there was only one trial, so that was quite simple; but you can even say, I want all the trials in this trial template, but only the ones that have a certain response in them. That's something you can do to have rather complex trials, where you evaluate your condition based on just one of many responses that participants give. And then we define the actual branches themselves, which are then specified in the actual block sequence of the experiment. In this case, this is the block that evaluates the condition: when participants complete that block, FindingFive knows what their responses were and what branch to assign them to. And then we have this dictionary here that defines those branches, and as long as the names of these branches match the names that I gave in my branching dictionary, FindingFive can handle it and we get conditional branching. So in this case, they see different blocks depending on which response I gave. You can see the same strategy for the rating block — branch A, branch B, and branch C — and also for the accuracy block, where I made branches that I called pass and fail. I can show you what that looks like: in the accuracy condition, it's the same basic structure as the other block, except in my branching dictionary here the method is accuracy, and I have a minimum score that people have to reach — in this case it was 80%, so 0.8. I define triggers in exactly the same way, and in the case of accuracy, you evaluate your branches based on a true or false condition: did they pass this threshold?
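Pulling that walkthrough together, a block with an accuracy-based branching dictionary might look roughly like the sketch below; the field names (method, min_score, triggers, branches) are assumptions reconstructed from the description, not verbatim grammar.

```python
# Hypothetical block with a branching dictionary (accuracy method);
# all field names are assumptions based on the description above.
accuracy_block = {
    "trial_templates": ["afc_training"],
    "branching": {
        "method": "accuracy",
        "min_score": 0.8,              # pass threshold: 4 out of 5 correct
        "triggers": ["afc_training"],  # trials that count toward the score
        "branches": {
            "true": "pass",            # branch (block sequence) if passed
            "false": "fail",           # branch if failed
        },
    },
}
```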
So this is like a really complex choose-your-own-adventure experience for your subject.
Yeah, although they don't know that they're choosing their own adventure, in most cases. Whether you want to alert them to that fact is up to you as the researcher, but in most conditions they wouldn't know. Yeah.
Okay, should we do some more questions here? Okay, yes. So someone asked: would you use conditional branching to pseudo-randomly send participants to different lists or versions of the experiment in a between-subjects design?
No. We have a different feature called participant grouping, where basically we will automatically group participants into different lists. Conditional branching is always conditional on a specific response, so it's dynamic from within the actual study, whereas assigning participants to different groups is something you want to do up front, before the experiment begins: as soon as somebody joins the study, you want to assign them to one of those lists. That's done through participant grouping, which actually looks very similar. I don't have to make any kind of special blocks or anything; I can just define, you know,
group one, and they get this list of blocks, and group two gets some different list of blocks. And that would be how I could automatically assign participants to different groups. What FindingFive does behind the scenes is that the first participant who joins the study gets assigned to one of these groups randomly, and then the next participant gets assigned to the other group, and so on.
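A grouped procedure along those lines might be sketched as below; again, the field names are assumptions, meant only to mirror the idea of two lists of blocks.

```python
# Hypothetical participant-grouping procedure; field names are assumptions.
procedure = {
    "groups": {
        "group_1": {"blocks": ["list_a_block"]},
        "group_2": {"blocks": ["list_b_block"]},
    },
    # Behind the scenes, assignment alternates: the first participant is
    # placed in a random group, the next in the other, and so on.
}
```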
That's not something I made a demo for, but it's pretty straightforward, I think.
I think we have some other questions about other aspects of FindingFive. I don't know if you want to answer those now or wait.
I don't know. Patti, do you have any thoughts? Um,
I guess — would it be worthwhile? We have Shiloh Drake also here, who has conducted research using FindingFive. I thought she could talk a little bit about what the results look like and how simple it is to actually look at your data, which is why we're going through all of this in the first place.
I actually think that would be wonderful because there are some questions about data storage and access and things like that. So maybe that would be perfect to address those questions as well.
Shiloh, are you ready?
Okay, let me see if I can unmute her.
We will unmute Shiloh if she's ready. And you, among others, have been very helpful editing the Google Doc online, formatting it nicely, and adding the responses to the questions and things like that. So thank you so much for that as well.
So let's see — I've been answering questions on the Google Doc, by the way. I'm just someone who uses FindingFive; I'm not affiliated with the team in any way. Just so you all know.
So I guess my role right now is to show you what the results end up looking like. So I'll just pull up a CSV file that one of my latest studies on FindingFive generated.
So you should be able to see my Excel file on the screen right now. And this is a CSV file, before anything else — before I did any cleaning of the responses or of the file. So it's got everything from the ID of the experiment, over on the far-left column, to — let's see — the type of response that they're giving. In this case, I'm only using key presses; this is a reaction time experiment. So FindingFive recorded their response and reaction time, and this is in milliseconds. You can also see the trial template — which trial these are in.
Are there any questions on the Google Doc for, like —
Sorry, go ahead.
Are there any questions for, like, what else I should show?
We have a lot of interest in what the data file output looks like, which you're showing now, which is great. And we —
should say that data is sent to you as a CSV. Yes — audio responses, like I said earlier, are OGG files. But this is exactly what you'll receive, pretty much.
It is convenient that it comes as a CSV if you're going to be importing it into R, for example.
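Or into Python — a minimal sketch of loading such a results file with pandas. The column names here ("response_type", "reaction_time", "trial_template") are guesses based on the file shown on screen, so check your own headers:

```python
import pandas as pd

df = pd.read_csv("session_results.csv")

# keep only keypress responses, as in the reaction time study shown
keypresses = df[df["response_type"] == "keypress"]

# reaction times are recorded in milliseconds
print(keypresses.groupby("trial_template")["reaction_time"].describe())
```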
Right — into R, or just manually examining it in Excel, which is great. So there was a question about data storage: in which country are the servers storing the data located, and if there are multiple, can we choose them? This is because some European ethical and privacy regulations could be a concern there.
Noah, would you like to speak to that?
Okay, can you hear me? Sorry — I think it was set so that only you could unmute me. Okay. So I will no longer mute myself when other people are talking.
I'll just be quiet.
Okay, yes. So we have servers on the East Coast and the West Coast of the United States. We do not currently have any server locations in Europe. This has been brought to our attention before, but we had just one single lab with an interest for which this was an issue, and so it wasn't really — you know, our team pays for these servers out of pocket, basically, unless we get donations. So
that, unfortunately, is a restriction we have at the moment, although
we are currently in a stage of expansion, and we're trying to get more people involved. And so if this platform seems like something that you want to use, and you're willing to donate a little bit of money to help us with server costs, that would certainly facilitate us opening servers in Europe to expand into that market.
Because I think another question was just in reference to IRBs. So it might be useful to include, in the other document, some links to publications that used FindingFive, so people can see how others navigated the IRB there. But it's a good question. I guess maybe this is good for Shiloh: what was the situation for navigating the IRB using FindingFive?
This is — I mean, when did I run this study? Oh yeah, this was run while I was a visiting researcher, shortly after I got my PhD, at the University of Arizona. There, the linguistics department had an in-house IRB. And as someone else mentioned on the Google Doc, this is also the university from which Ken Forster's DMDX came.
So we're used to seeing a lot of online and remote data collection studies. I think our IRB just said, okay, yeah, this is just another remote experiment. And because all the participant IDs — I'll expand this column — participant IDs look like absolute gibberish, the confidentiality of their responses is matched by that. You can really only find out who's done the study by — I think you have an option to collect the emails or the names of participants, but they're not associated with individual responses, which usually has satisfied IRBs in my experience.
That's correct. Yeah. So usually, since we automatically generate participant IDs, from the researcher's point of view it's totally anonymized, unless they ask for emails because they need to do compensation or study credit or something like that. But I understand there's a concern that we have that information on the back end. I mean, we use a third-party resource whose servers are quite secure —
very secure, because it's CloudFlare, and they handle a lot of server storage for a lot of big companies. So in my experience — at the U of A, I should say — helping other people do studies on FindingFive, I haven't known anyone who has had an issue. But like Shiloh mentioned, we have an in-house IRB, and they're aware of the fact that the actual personal sensitivity of a lot of this data is limited and that the servers are secure enough to satisfy them. I should mention that, you know, we're kind of in uncharted territory now with this pandemic, and more researchers are probably going to be conducting online research at universities and in labs where that hasn't been done before. So I imagine that the FindingFive team would be willing to help if your IRB is having difficulty understanding what online research is all about, or needs more detail about our security features.
Right. I'll point out that before FindingFive there was plenty of online research using Mechanical Turk and other platforms, so there are plenty of publications and precedents if you're worried about those issues. The next question, with a plus-one, is: how do you export the results? I'm not sure if you quite illustrated that. Is that automatic?
Yeah, so one thing we haven't talked about: when you do a preview of a study, you download your preview results right away. But that's just for the researcher, you know, testing out and piloting for themselves. When you do an actual study, you run what we call a session of that study. So you code up your study, you decide you want to run a session — you can do it on FindingFive or Mechanical Turk — you say how many participants you want to recruit, etc. When that session is completed, you download a single CSV like this one — and that's why there are participant IDs — with all of the participants' data from that session. Essentially, there's just a button that you click to download that data, and it comes to you in this CSV format.
And I can actually stop sharing my screen with Excel and switch to Firefox, which I've got in the background, where I can go get the CSV file. So here, you can see that I ran three separate sessions of this study, and I'm looking at just the ones that are finished. So I get this button that says "batch download data," and if I click that button, it'll say, oh, yep, here you go — here's your data in this nice CSV file, in this very large Downloads folder that I never clean out.
Yes, we all struggle with that
So, just to talk a little bit through what this screen is actually showing you, the researcher: over on the right-hand side it says platforms, so that's going to tell you if you're on Turk, or deploying the study through FindingFive in the lab. And participant statistics — how full is the study? If you wanted 50 participants, are you done collecting 50 participants' worth of data?
I'm sorry, William, go ahead.
No, no, that's perfect. So another question was: can FindingFive generate a link for studies, so that we can post this link on a crowdsourcing website like Mechanical Turk?
Yeah, so if you were to, say, try to send a link to your actual study, the person trying to look at that link would not be able to access it, because you're the only person with read/write authority. But if they have a FindingFive account, you can add collaborators to a study. You can see on Shiloh's screen, on the left, there's a collaboration tab — and Shiloh, if you can click on that, you can see she has some collaborators. Tang, I'm assuming, was added as a collaborator to help you troubleshoot something. Yeah. And there's an actual research collaborator, but you can add new collaborators who are part of FindingFive to a study, and you can affect whether or not they can edit the study and whether or not they can run sessions, and this will allow them to have access to everything. That's as much as we do right now. But as you can see, there's also a findings tab, and right now we don't have this set up, but this is something that's on our radar that we really want to do — to try to make this a place to facilitate replication of research and sharing of research, both within our platform and with other platforms such as GitHub or OSF.
So we were talking about collaboration with other researchers, but the question, I think, was also aimed at: could you use a link to the study on FindingFive to recruit participants elsewhere?
Oh — participation through a link is a feature that we have, yeah. So you can, say, email the link to participants. If you're running the study through the FindingFive platform, they have to create a FindingFive participant account — or, if they want, researchers can participate in studies as well. So they would have to sign up, but they can use that link to participate in your study.
Right. It can't be embedded right now without either a Mechanical Turk worker account or a FindingFive participant account. But you can definitely send the link to your study to whomever you want.
That's great. And then, I think, another question was: can you redirect to a URL at the end of the experiment, like Qualtrics, for subject credit or compensation — like a subject pool?
Yes, you should be able to add a URL into the study. Whether we have it set up to make it clickable — I've never actually done this before personally, and I haven't actually thought about it, but I believe we can do that. At the very least, if we can't do that, what we can do is: you, as a researcher, can ask for participant emails, and when somebody completes a study, you can just send them that link through email.
Is that what we've done in the past, Shiloh?
In the past, what have I done? In the past, I collected emails and set it up that way. I'm trying to remember what we did here — I think this one was just for course credit. So they just sent me the email, and the U of A had a system that collected all of the participants, so I could just click off a box and say, yep, you participated, here's your course credit.
And just to reiterate, that email is not attached to their data —
so they can maintain anonymity, right? Yeah — I have no idea whose email goes with whose data or whose responses.
Just one clarification question: is it the case that participants are required to have either a FindingFive account or an Amazon MTurk account? Or must they have a FindingFive account regardless?
Either — it's going to depend on the platform that you use to launch your session. If you do it through FindingFive, they have to have a FindingFive account. And by "do it through FindingFive" I don't mean that you made the study in FindingFive; when you go to launch a session, you can say, I want to use FindingFive itself as the platform — not to recruit; you have to recruit participants yourself — or, I want to launch it on Mechanical Turk. And if you do the Mechanical Turk option, they do not have to have a FindingFive account — in fact, your participants will probably have no idea that they're using FindingFive on any level.
So I'm trying to update the responses here — where was I — okay, so that's great, thank you. And I think if I don't ask this question, I'm going to get some complaints here: again, the question about eye tracking or webcam-based data collection — can you at least record webcam data, if not use
online eye tracking? — No, we don't have any functionality built into our platform right now to access the webcam at all. I mean, that's something we could do, but we've been hesitant so far, because it gets a little hairy, and we just weren't sure that the data quality was good enough to be worth it at this point. But like I said, if others are doing it and their quality of data is sufficient for our typical researcher users, then we would definitely be interested in pursuing that.
And here's one question from me that I know other people will probably be interested in, in terms of video presentation — stimulus files that are videos. Is that perfectly possible?
Yeah, we recommend — this is something we can talk about. I mean, we're running a little short on time, I think — is that right?
Well, yeah, we're slated to take a break at 12:15. We could probably go over a little bit. We've exhausted most of the questions on the Google Doc, I believe. So that's good.
So, I don't know if people are already doing this, but we could maybe turn to our discussion of online-study best practices and MTurk integration.
Good idea, yeah. Because we could do that briefly and maybe have a little time at the end for people to play around with the platform — and you should feel free, I think, to play around with it, and we should be able to grab someone to help you afterwards, if possible, during the session. But yeah, I think it's a great idea to try to get through that.
Okay, sure. So, just really quick, to answer questions about whether you can present video or collect auditory data from a participant: I want to refer folks to our API help desk at findingfive.com. You'll be able to see all the types of stimuli you can use — text stimuli, images, playing audio clips to the participant, playing videos, and then tokenized text, which Noah showed earlier. And in terms of responses, you can collect text box responses, you can do the types of responses Noah was showing earlier — like a choice response (left/right, yes/no), a rating scale (one to five, one to ten) — or record the participant's voice as an audio response. So check those out to see some of the features. And as hopefully has become clear, we're really excited to hear about new things that people want, so please tell us your ideas for new features, and we'll see what we can do.
Noah, did you have anything you wanted to add to that?
I guess, you know, while we're talking about all those, I'll just say —
I think Patti is going to jump in and talk mostly about MTurk, but just because it's related to this: one thing that a lot of people who are new to online research don't really think about is that the size of your stimulus files, and how many of them you have, is going to affect the quality of the study, because each one of these files has to be loaded individually in the browser for each participant. So we absolutely can do video. We recommend that, if possible, the files be compressed, or that if they're not compressed, you limit their number to some extent. We have some functionality built in to lessen these burdens — for example, we can make it so that when participants are taking their study, FindingFive will only load, say, the first two blocks first, and then after those it will separately load the next two blocks, to keep the browser from holding on to too much data and having that slow down the experience. These are things that are within the researcher's control, but it's just something I wanted to put out there while we're talking about stimuli. We've had instances in the past — like, we had somebody doing a phonetic study who had, I think, about 800 audio stimulus files that they were using for their phonetic perception study, and they were all uncompressed and all being loaded at once, and so they were very curious to find out why all the participants were reporting that their study was crashing. And that's the kind of thing that I think a lot of researchers don't think about, because when you have people come into your lab, that stuff's preloaded on your computer, so there's no loading required. But online, every participant is essentially downloading those files when they take your study.
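A hedged sketch of one way to prepare for that: batch-compressing WAV stimuli to MP3 with ffmpeg, so each participant downloads far less data. This assumes ffmpeg is installed and on your PATH; the paths and the 96 kbps bitrate are illustrative choices, not FindingFive requirements:

```python
import pathlib
import subprocess

src = pathlib.Path("stimuli")
dst = pathlib.Path("stimuli_compressed")
dst.mkdir(exist_ok=True)

for wav in sorted(src.glob("*.wav")):
    mp3 = dst / (wav.stem + ".mp3")
    # -y overwrites existing output; -b:a 96k is usually plenty for speech
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(wav), "-b:a", "96k", str(mp3)],
        check=True,
    )
```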
So that should, of course, be of great relevance to sign language research as well.
Okay — back to you, Patti.
Okay, so we do have another tutorial on the steps you'll need to go through first if you are going to recruit participants through Turk, rather than sending them a link, having them do the study on FindingFive, and figuring out how to compensate them on your own. You'll need to go through the steps of creating an Amazon Web Services account and creating a new IAM user — an Identity and Access Management user. I wish I could show you this process; it's pretty straightforward, and like I said, we do have a tutorial on how to do it, but I'm just going to leave that to you to figure out. You do need a credit card to set up that IAM user, so keep that in mind. Once you set up your Amazon Web Services account and your IAM user, you're going to get an access key and a secret. On FindingFive, you'll have your own account profile — this is my personal profile here — and when you scroll down to the bottom, you'll see that we ask you for your AWS access key and your AWS access secret. So that information that Mechanical Turk gave you, you'll just pop right in there. And once you do that, you will be ready. Here's just a study that we have set up showing folks how to use the barrier feature — it's a tutorial you can find on news.findingfive.com — but if you go here, like Shiloh was showing us earlier, you'll see a sessions tab. I can click on sessions — we have no active or scheduled sessions for this particular study, but I'm going to create a new one. And rather than doing the study through FindingFive, I'm going to do it on Turk, so that I can recruit participants from around the world. You'll see a pop-up menu here that's going to ask: do I want to start the study in sandbox mode, or do I want to go into full production, where I'm actually collecting data from participants? One best practice is to always, always try your study in sandbox mode first — it ensures that workers don't get frustrated when there are errors or glitches in your study. Pretend we did that already, and we're now moving on to production. So we are ready to set up a session and collect data from participants through Turk. You want to create a name for your study that is informative, recognizable, and typically short, with a description that sounds enticing to workers. And you can give this particular session a name — I don't remember if Shiloh actually used names for her different sessions; okay, you can do that. Otherwise, FindingFive will assign a string of numbers and letters as a session name. But you could call it, you know, "first set of participants." Then there are some details on what the embedded window should look like — the size of it. Next, you'll be asked additional questions about the participation restrictions: how many participants do you want to run in this particular session? What's the estimated duration of your study? Do you want the study to time out — so if a participant walks away in the middle of completing your study, should that HIT stop for them? You can also set different features, like blocking participants who have completed a past session of the study, so that they can't keep doing your study over and over again, or blocking participants who attempted but failed to complete a past session — or perhaps those folks that timed out; maybe now you want to make sure that they can attempt it again — and ensuring that workers are over the age of 18. All right, and we can do geolocation restrictions for Turk.
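Once those IAM keys exist, a quick way to check that they work is to query the official MTurk sandbox with the boto3 library — the sandbox always reports a $10,000 balance. The key values here are placeholders you would never commit to version control:

```python
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",          # placeholder
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",  # placeholder
)

# sandbox: {'AvailableBalance': '10000.00'}
print(mturk.get_account_balance())
```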
I'll try to go through this really quickly. If you have other study sessions on your user account, you can restrict participation in this particular session based on what workers have done before. So if you have a multi-session study, you can make sure they've done part one before moving on to part two; likewise, you can block participants who have completed part one from participating in part two. And then compensation: in terms of best practices, there are a lot of papers out there — I can share links — with estimates of how much you should reward participants in order to ensure that you're being fair and ethical.
Going very quickly here: you'll also be asked to select a consent form. These are just the silly consent forms that I've uploaded to my account, but you'll want to include a consent form that you have approved through your IRB — you'll get to see a preview of it here. And then we have some information about sharing data with us. Once you go through and agree to all of these features, and identify the different restrictions you want to have on your participants, you're ready to go, and your HIT will be active relatively soon.
To be clear, though, you don't have to agree to any of those to launch a session.
Great. Okay, I take it back.
Any of these three things, right? This is stuff that we're asking for, if you don't mind, so that we can learn more about how people use the platform — what's successful and what's not, especially in terms of Turk integration.
So again, in terms of best practices, we really strongly encourage you to try running your study first in sandbox mode. If you're using Turk, consider the possible distractions a worker might encounter when they're doing your study — we could have a participant who is walking away from their computer every five minutes, or who has loud noises in the background. So consider quality assurance when you are preparing your study for online deployment. You'll want to have really clear instructions and some way of ensuring that people are following your instructions. Keep in mind, folks are not doing the study in your lab; you don't have control over the distractions that they may experience. We have some features in FindingFive that can hopefully help you overcome those issues. We have catch trials, which we strongly encourage you to use if you're deploying your study online. You can identify how often you want these catch trials to occur, but basically they ensure that the participant is not just clicking yes to everything and is actually paying attention to what's occurring. Noah's description of conditional branching earlier could also be a way for you to engage in quality assurance and ensure that participants are performing with the kind of accuracy you want in your study.
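For instance, you might screen the downloaded data afterwards. A hedged sketch — column names are illustrative, not FindingFive's actual schema — that drops participants whose catch-trial accuracy falls below 80%:

```python
import pandas as pd

df = pd.read_csv("session_results.csv")

# accuracy on catch trials, per participant
catch = df[df["trial_template"] == "catch_trial"]
accuracy = catch.groupby("participant_id")["correct"].mean()

# keep only participants at or above the 80% threshold
keep = accuracy[accuracy >= 0.8].index
clean = df[df["participant_id"].isin(keep)]
print(f"kept {clean['participant_id'].nunique()} of {df['participant_id'].nunique()} participants")
```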
Can I jump in with one little thing? In retrospect, I should have asked her if she'd be willing to say something, or to prepare something for me to say, but one of my close colleagues from Shiloh's days at the University of Arizona did a study — she did it in FindingFive — where she specifically wanted to see how participants performed in the lab versus over Mechanical Turk. And in her case, despite all of these warnings that we give — I mean, they did consider these things when they designed the experiment — the Mechanical Turk workers actually performed better than the undergraduates at the University of Arizona. They got a little bit cleaner data from the workers than from the undergrads. So you should take these issues seriously, but don't be discouraged: online data can actually be pretty good.
Yeah, there are lots of really nice published studies comparing in-lab data and Mechanical Turk data. Just one last thing we might want to mention: currently, the only way to automatically pay your participants is if they come through Mechanical Turk. However, we are going to be implementing that feature through FindingFive, hopefully in as few as a couple of months. So there is the possibility for you to not have to have your participants create a worker account on Mechanical Turk in order to be automatically compensated.
Okay, so I think that's all the questions. Well, there was one random question — I don't know if you answered it already: does FindingFive use jsPsych or the like?
No — yeah, we don't use any third-party packages of that sort, you know, experiment-building kind of stuff. No.
Let's see. So one person is forming a question right now — this is a real-time question I'm relaying: any chance of some good example papers comparing MTurk and, I'm assuming they mean, conventional data collection?
Yeah — let me get some citations, and I'll put them in the Google Doc.
Exactly — so there's another Google Doc, a separate one, and I'll link to it now, that just has a list of resources. Feel free, anyone, to add to that, but there will be some references there for you if you're interested. Thank you. Let's see: are there tools for embedding ghost links in trials to weed out AMT bots?
That's an interesting question.
No, we have not done that. Although, in our particular case, we've kind of been monitoring — we were expecting bots to be an issue, and it hasn't been, for the researcher users that I'm aware of. It's not something that's really been an issue for us so far, so we haven't done anything like that.
Right. Um, that's great. Let's see — I don't know if I missed any questions, but if I did, feel free to reiterate your question on the doc so I make sure to catch it. But one thing I was curious about: do you know anything about the demographics of the MTurk community, or anything like that?
There are also published papers exploring that each year, so I can include links to those.
Fantastic. So yeah, I guess we're getting close to the end here. If people still have questions, please add them, but I think this might be a good place to stop. So, with the end of the session: I'm just curious — if people are trying this out and they're looking for help or assistance, do they just contact you through the normal information that's listed on the website? Is that just the way to do it?
Okay, yeah. Depending on the particular things that you need, there might be an email address that feels more suited, but at the end of the day, all the different email addresses go to the same people.
So yeah, any of them that you find and send an email to, we will get back to you.
Okay, well, I want to thank you guys tremendously — thank you so much. Again, this was last minute for me, but even more last minute for you guys, because I had an extra day on you to get this organized. Thank you so much for jumping in at the last minute and doing this. I can imagine this is going to be useful to a lot of people. Again, this recording will be posted online at some point, hopefully very soon. Let's see, what else did I want to say — I'm just imagining that with so many people unable to collect data in person, there's going to be a surge of interest in this, plus, I imagine, unfortunately, also a surge in people using Mechanical Turk because of needing money. So the participant pool will hopefully be consistent there. What I'm going to do now is try something — I have no idea how this is going to work, but I'm going to try unmuting everybody simultaneously and then asking them all to give a very big round of applause for our presenters. So if anyone out there is listening, please get ready. Okay, I'm going to unmute you all, and once I've unmuted you, I want you to start applauding. I'm just very curious to hear how this is actually going to sound — and my dogs are going to freak out. All right, ready? 3, 2, 1 —
Okay, my dog just barked, so he's upset that this happened — he's mostly upset that I clapped. But thank you all for participating in that. So I think what we'll do now is just stop for half an hour for lunch and come back at 12:45. We're going to have some presentations on neuroimaging and other sorts of data — like in aphasia and other atypical populations — in terms of datasets that are already available online, and additionally a discussion about how to go about just asking people for data, which maybe is the easiest and quickest way to get data in the short term. Noah and Patti, is that okay? Sorry — and Shiloh: if you want to come back, that's great, but don't feel obliged; I'm sure you've got plenty of things to work on. We'd love to have you again, but no worries about that. And so, I guess — oh, sorry, I think I ended up muting you guys, so I'm going to try unmuting you. Okay? Yes. So, I'll see everyone back at 12:45, and I will be sending out information about where to find the videos once they're finally posted and available. Okay, I will see you guys at 12:45.
Thank you, everybody. Thanks everyone.
And I will stop sharing.
Okay, good. Okay, so it's 12:45. I think people are still just starting to file back in here — it looks like the number is increasing again. That's great. Just to let everyone know who didn't hear before: we actually had a great turnout for the first session; we had over 100 people. It will all be posted online — there's a recording going automatically through Zoom. And it's a great tool, by the way, for those of you that don't know yet: Zoom will automatically record all of your session and then post it, and then you can download it and edit it and do everything else that you might need to do.
So now, up next: we just had a long workshop on FindingFive, which is this great platform for doing online data collection for psycholinguistic experiments or any other kind of psychophysical measures. And now I'd really love to welcome Florian Schwarz. He gave some great presentations at CUNY and has also kindly agreed to talk about PCIbex, which is based on Ibex — an online platform similar to FindingFive, but it's been around for a bit longer and has perhaps more flexibility and other features. So Florian is going to talk about that for about 15 minutes or so. And again, if you have questions, I'm going to direct you to the online Google Doc that we have — I'm going to copy the link
and put that back into the messages — the chat box down here. So if you have any questions, please go to the Google Doc; there should be a section for Florian's presentation there, "PCIbex overview." Just go ahead and add your questions there, and we'll do our best to read them out loud. If you put your name, I can unmute you and you can ask your question in person, but I'm happy to ask it for you. And if we don't get to your question, it will always be answered on the Google Doc — at some point in the day we'll get the questions all answered. So with that, I'd like to welcome Florian.
And it has sort of an illustration of what this looks like. [Demo audio: "...comes in a tank, which is perfectly square."]
So we have some images displayed here, and we have this text, which — just for fancy illustration — is actually unfolding with the audio that's being played back. And you can now select an image either with the keyboard or by using the mouse. [Demo audio: "This is a pen, which is strikingly red."] Both
are possible. I'll stop going through the trials — there are just a few of them — and give you a rough sense of what the code behind this sort of experiment looks like. Here is the core trial template. In PCIbex, you can work with template trials and then feed in all the information from a CSV file. You basically create elements — the core syntax of Ibex is such that you create elements, like a text element — and then you can modify their properties, and then you can carry out actions, like printing it onto the screen. You can set timers. There's this sort of thing called a selector, in order to be able to click on parts of what's displayed on the screen. And this bit here — and I'll maybe show a little bit from the tutorial with more elaborate code — is an extremely useful feature for anybody working with somewhat elaborate visual stimuli: what we call a canvas element, just like a painting canvas. You can very freely control where to put images. So let me just scroll here.
I'm sorry — to the illustration of the canvas element. So you basically create a pixel area on the screen — or on the browser window, rather — and within that you can put images and text and anything else visual wherever you like; you can overlay things. This is really a great tool for visual stimuli. A lot of platforms have limitations — and this is true of the old Ibex and other platforms as well — in that putting things on the screen where you want them is hard, because everything gets executed sequentially and basically printed out line by line, underneath each other. This canvas element gets beyond that and gives you complete freedom about where to put things, so that's an extremely useful feature here. Let me maybe highlight a couple of other things about the platform. We have pretty good integration of various recruitment tools. We use this a lot with Prolific, which is a great online recruitment platform for scientific purposes — an alternative worth considering to Amazon Mechanical Turk. And we also use it with Sona, which many universities use for internal subject pools; we get participants for course credit through that internally. All of this is fully integrated, and it's all described here. There's even a section here on how to extract the data from PCIbex and analyze it in R, so there are a lot of code scripts here. Something else that's worth mentioning is that there's really good documentation and support. There's a forum on the website — Jeremy very carefully tends to it — and many of your questions may already be answered there if you check; you can post comments there or send them to us at the email address that we have here. Let me perhaps, in closing — and maybe we'll even have a minute or two for questions otherwise —
also mention that there are some more advanced capacities that are not ready boilerplate when you get something on PCIbex, but we do have access — or can give access — to both people's microphones and video cameras. So for linguists, if you're interested in production, that may be something that's useful: you can actually very easily do audio recordings. And ultimately, you can also use the webcam, and in principle the capacities are there, based on that, for allowing something like web-based eye tracking. Now, this is nowhere near ready to go — and we'd actually be very happy to have other people chime in if they want to get involved in working on this — but it's basically working like any other eye tracker: it tries to identify features in the visual input from the camera as you look at different parts of the screen. One shouldn't have any illusions about the accuracy of this — you might be able to do something like, are you looking at the left half of the screen or the right half — but for many visual-world-type studies in psycholinguistics, for example, that's already really useful. There are issues with that: you can imagine that it involves a lot of data storage, and data storage is an issue for the results as well. So on the PCIbex Farm that we offer, space is limited. You can easily set up your own farm — this is true of the old Ibex and of our PCIbex Farm as well; everything is open source and the code is freely accessible — so that way you can get around things. But here we are, of course, looking at much more advanced types of features. So we're really excited to get this out there, and we're very happy to answer any questions if we have a couple of minutes. William, maybe I'll stop sharing right now and just get back on video, and I'm happy to answer any questions that come up, either from you or things that you've seen on the Google Doc.
Yes. So one thing that popped up in the Google Doc — it's actually from Noah Nelson, who's from FindingFive. I'm just going to unmute you, Noah, if that's all right, so you can go ahead and ask your question to Florian directly.
Hey, Florian, this is great — I hadn't known about PCIbex before this; this is really cool stuff. So I'm actually wondering about the Sona and Prolific integration that you guys have, and what that kind of looks like — how it works.
Yeah, sure. So basically, the thing that's needed — and the other integrations those platforms offer work the same way — is that you have to include, say for Sona, the Sona ID in the link that sends people to PCIbex. So you have a query parameter: "?id=" and then the Sona ID. There are some script add-ons that you need in your PCIbex script to save that as the value of a variable, and you basically just keep that stored throughout. At the very end of the experiment, you have a return link to Sona. And it's the same for Prolific: since you have stored that value, you can incorporate it, and then you get linked back to Sona, say, with that information. They typically have a sort of return link where you can come back once you've completed, and then the Sona system will automatically register that somebody has reached the end of the experiment and therefore should be counted as having completed the study and get credit that way. Especially for large studies, you don't have to go through manually and approve all the participations if you don't want to.
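Sketched outside any particular platform, the general pattern is just a round trip through URL query parameters. All URLs, parameter names, and IDs below are placeholders, not PCIbex's or Sona's guaranteed formats:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# 1. The link given to Sona/Prolific carries the participant's ID:
entry_link = "https://my-farm.example.org/experiment.html?" + urlencode(
    {"id": "SONA123"}
)

# 2. Inside the experiment, recover and store that ID from the URL:
participant_id = parse_qs(urlparse(entry_link).query)["id"][0]

# 3. At the end, send the participant to a return link that credits them:
return_link = "https://myuni.sona-systems.com/webstudy_credit.aspx?" + urlencode(
    {"experiment_id": "999", "credit_token": "abcd", "survey_code": participant_id}
)
print(return_link)
```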
No, no, absolutely not. All that PCIbex generates is just a web link that people go to. Maybe another worthwhile thing to mention briefly — I don't know how you guys handle it; I hadn't heard about FindingFive before, so I'm excited — is the downloading of stimuli, and this is especially relevant for audio and image files, where a bit more data gets introduced. If you're interested in timing, you can download all the stimuli before people start the experiment. They don't have to do anything; it happens automatically. We usually do it so that as soon as they start reading the consent document, the download runs in the background. There are different methods of doing it, but you can set it so that trials don't start until all the resources are available. This has two pretty important advantages. One, there is just no experience of lags and so on once the experiment starts for the participant, no matter what their internet speed is, because they only do the experiment once everything is there. And it also means
that within each trial, the timing accuracy is actually really quite decent. Obviously, we're limited by what people's devices are like in terms of keyboard accuracy, which is usually what, plus or minus 100 milliseconds or something. But we've done a lot of response time studies — when the effect sizes are big enough, you can easily do this, and the accuracy really is good. And it's not affected by internet speed, because everything gets preloaded. So, a little bit of a tangent beyond what you asked, but participants don't have to do anything besides clicking on the link; the stimuli automatically get downloaded in the background, and then the experiment will run.
Yeah — so that script add-on you mentioned was just on the researcher side.
Yeah, yeah, that's right. Yeah.
So there are a few very small clarification questions — I'll just ask all three, maybe. First: if you have an account on the Ibex Farm, do you need to create a new one for PCIbex?
Yes, they're completely separate. And there's no direct transferring either, although everything that works on Ibex should work on PCIbex too. So you can migrate your stuff, but they're completely separate offerings by different
people. Yes. Second: is PCIbex free?
It is absolutely free, and we intend to keep it that way.
Yeah, right. Okay. And then there's one question that's got a couple of plus-ones — so maybe Noah and Florian can have an impromptu discussion here: could you discuss the pros and cons of these various platforms? This is probably a longer discussion than we have time for, but I don't know if you both maybe have some brief comments. Whoever wrote it says that everybody tells them to check the platforms out for themselves, but they feel overwhelmed, and the choice is sort of taxing. Are there some quick-and-dirty comments you guys could make about that?
Right — so I guess maybe a good start would be a competition, right? Maybe half the people in the tutorial here can try using PCIbex and half can use FindingFive, and then they could switch, and we can see whose experience was more successful and who got better, you know, publications and things. Which platform do we post the study on to evaluate performance?
We'd need randomly assigned participants, though, right?
I see. All right. It's good to have you here. All right. So there are some more questions, but I think those can be answered offline in the Google Doc. We also maybe have some time at the end of the session to come back, if people are still around and want to discuss this more. But I would like to keep going here, because everyone has these diverse methods that we're using — for instance, I do neuroimaging and aphasia research, and I hope there are a lot of people who are interested in that as well. So with that, I would like to welcome Brielle C. Stark — I made sure to use the C, which is important for authorship. She got a PhD in clinical neuroscience at Cambridge, she's now an assistant professor at Indiana University, and she's done a lot of work in neuroimaging and aphasia. She is going to talk about a number of datasets and platforms that are online, in terms of aphasia and other atypical populations. So with that, please welcome Brie.
William, can you hear me okay?
Yeah, you sound great.
Right, let me share my screen, if it wants to play nicely. So, Brian MacWhinney had to step off for another call, but he hopes to jump back in at the end, so hopefully we'll get some insight from him as well. Brian MacWhinney is the person who created TalkBank and who gets all the funding to keep it going. So I hope he jumps back on, because they've created some really cool tools. And as William said, one of the things that I do is figure out how we can make clinical language data a bit bigger — how we can use bigger data to answer some of our core questions. I've always looked at this through the lens of aphasia, as well as typical aging. So this talk will be a little bit more about that, but there are tons of other resources that I have on here too. So I'm hopeful that if you're not just here for aphasia, you'll still find it interesting.
You're showing your Presenter View rather than the slides. Okay, so it was a little bit weird — just in case.
Oh no. How about that?
That's much better. It still seems a little strange on my screen — it's kind of cut off on the left. I don't know what that's from, but maybe you can just show — yeah, yeah, that's better. That's good.
I had two monitors plugged in, so it didn't — yeah.
So now I have one monitor. Okay, so just to highlight: TalkBank.org is where all of these things currently live, and the goal of it is to bring together a whole bunch of resources from a variety of places and put them all in one place, to help those of us who are interested in big data — specifically, language-related data. This is what the TalkBank website looks like. Hopefully Brian can jump in at the end, because he wants to talk about this new thing they've just instituted called TalkBank DB, which is essentially a really great way to search the databases without having to go in and manually scroll — you can actually use search terms to figure out what databases to use, as well as some other really cool things. TalkBank has 14 different research areas that it supports; they're all called banks, and I've highlighted them here. There are some that are based on conversations, some more dialogue-based — those are CABank, SamtaleBank, and ClassBank. They've got child-specific ones, which you can see there as the child language banks — many people are familiar with CHILDES, which is also part of the system — but there are also PhonBank and HomeBank. Some of those have conversations and some are more monologues. They have multilingual-specific banks — you can see here a bilingual one and a second-language one — and then they have clinical banks, which they continue to grow and which are invaluable resources, because for those of you like me who collect clinical data, it's really quite difficult to find participants who fit your inclusion criteria. So having this available is fantastic. I'll show you what things look like in a second. I'll talk a little bit more about AphasiaBank, just because it's the easiest for me to talk about, since I use it the most. I do want to mention that even though there are multilingual banks specifically, a lot of these banks, like AphasiaBank, collect demographics that indicate whether the speaker is monolingual or bilingual — I don't know what proficiency metrics they use, but they do make that type of demographic information available — and AphasiaBank is now available in something like eight or nine different languages: Spanish, Mandarin, Cantonese, etc. English is the biggest at the moment, but there are lots of different ones available, so it's not just English-only. In order to access it — it's all password-protected. Each bank comes with what are called ground rules; all you have to do is read those and then email either Brian, whose email is on here, or Davida Fromm, if you're really interested in AphasiaBank — her email is on the AphasiaBank main website, which I'll show you a little screenshot of in a second. Students can join: I make sure that if I have a PhD student or a master's student in my lab who's working on this data, I sponsor them as a faculty member, and then they join as a member. And they also have some really great things you can use in class, too, if you want to use it as a teaching resource. So this is what the main page of AphasiaBank looks like — let me see if my little pointer works here. Can you see my laser pointer? Yes? Fantastic. All of these banks are set up in a very similar way. Right now my little laser pointer is over the AphasiaBank protocol; if you click on that, it will say exactly what data is collected and what tests are used.
So, for instance, in AphasiaBank they have demographics that they collect — and you can see what demographics they're collecting across all these sites. They do a neuropsychological battery, including a naming test (I believe it's the Boston Naming Test) and a really detailed repetition battery as well. And then they do a spoken discourse protocol, which is all monologue — it's not dialogue — based on picture descriptions, procedural descriptions, narratives like story retelling, personal life events, things like that. And for AphasiaBank, all of the spoken discourse material is videotaped and already transcribed for you.
For reasons related to PHI — protection of health information — I'm not going to show you the whole interface, because it would show an individual's face. I'll show you in a second what a transcript looks like. But I do want to mention that the neuropsych battery — for instance, the naming tests — is not fully transcribed just yet; that might happen in the future, so I've been told. One way you can do this on the AphasiaBank website: if you're just interested in browsing some of the data that's available, you would hit this browsable database, which will take you to some choices. If you're interested in speakers with aphasia, you would click the aphasia link, and then it would take you to the languages you're interested in, right? So if you're interested in English, you continue there. And then it's organized by site. So, for instance, Julius Fridriksson from the University of South Carolina collected some data that's now in AphasiaBank, so you'd click the Fridriksson site and it would take you to all of the participants he collected at South Carolina. There are something like 20 sites at the moment that collect aphasia data, and you can always become one too — just email Brian to join; you just have to edit your IRB accordingly. They also have things that are called non-protocol: there are a lot of interesting things, like group therapy sessions, that are recorded, but since they're not part of this giant protocol, they're not transcribed. You can still search those and transcribe them on your own, but they're not as easily accessible. One of the things that I think AphasiaBank is fantastic for is the fact that it's already transcribed and coded for you. It uses CHAT and CLAN — a transcription format and analysis software that Brian and his team have developed over, I guess, the last 20 years now; I think he published a manual on it in 2000. It's fantastic, it's all free, and it's compatible with almost every operating system — I think they're working on Mac Catalina right now. What you get is a transcript that you can download and then manipulate using the CLAN software. So right down here, this is what your CLAN interface window looks like; that's where you tell it the commands that you want. It's a little bit like coding, but it's a very basic thing to learn — the manuals are fantastic; they lead you step by step, and there are some tutorials as well on the AphasiaBank website and TalkBank, called screencasts, that will walk you through these things. And if I zoom in — hopefully this works — all of the transcripts are coded with gestures and pauses. They code at the word level as well as at the utterance level. They give you a fantastic amount of information about errors, like paraphasias, morphological errors, and agrammatic or paragrammatic sentences at the utterance level. They give you just a fantastic amount of data that you can ask it to output later. And the transcripts' quality and coding are all checked by the team at TalkBank.
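As a toy illustration of two of the structural measures these coded transcripts support — mean length of utterance (in words) and type-token ratio — here is a minimal sketch; the real CHAT/CLAN computations handle morphemes, error codes, and exclusions that this ignores, and the utterances below are invented:

```python
utterances = [
    "the girl is washing dishes",
    "and um the stool is tipping over",
    "water everywhere",
]

tokens = [word for utt in utterances for word in utt.split()]
mlu_words = len(tokens) / len(utterances)   # mean length of utterance, in words
ttr = len(set(tokens)) / len(tokens)        # type-token ratio

print(f"MLU (words): {mlu_words:.2f}  TTR: {ttr:.2f}")
```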
Just to give you an example of some of the information you can get: this is called the EVAL tool in CLAN, and it gives you really valuable information like speaking time. You can ask it to do this for all stories — so, for instance, if you're interested in the story retelling of the protocol, which happens to be Cinderella, you can ask for information for every speaker from Cinderella, or you can ask for every speaker across all stories. It'll give you things like mean length of utterance — mean length of utterance in words, or you can specify if you want it in morphemes — total tokens, percentage of parts of speech, things like that. It gives you an enormous amount of data, and it's a fantastic way to probe language structure. But you could also use it for language function: if you wanted to go in and code things related to story grammar, cohesion, coherence, main concept analysis, core lexicon — all of those — you can use this data to do that. The transcripts aren't coded specifically for those more discourse-level analyses, but they're free for use once you're a member. So then — William suggested I talk about how I've used this data, to maybe give some insight or some examples. This is not AphasiaBank data, but this is the CHAT/CLAN coding, just to give you an example of how it might be used. William mentioned that I have a degree in neuroscience, so I approach everything related to language from the point of view of what's going on in the brain. What we did, with my colleagues at South Carolina, is that we went in and coded a whole bunch of picture-description spoken language from individuals who had had a left-hemisphere stroke, and we coded a whole bunch of paraphasias, or word-level errors. And we looked at the brain damage — the lesions — associated with semantically related errors or phonemic errors, for instance. Having the CHAT/CLAN system — doing our coding, making sure we were reliable, but using that system — allowed us to get this information really, really easily. AphasiaBank and TalkBank do not come with brain data attached to them; that's a pie-in-the-sky dream that Brian has — maybe someday it will come to fruition. But you are able to probe more deeply into behaviors from a very large dataset, which is what I thought I'd give some examples of now: how I've used AphasiaBank. I published an article most recently in the American Journal of Speech-Language Pathology — I think it was 2019 — which you can see there on the left, and then my colleague Julia and I are currently trying to get the thing on the right published at some point soon; fingers crossed for 2020. So let me give you a little insight into what I was doing there. In the AJSLP paper, we were trying to figure out, essentially: do speakers with and without aphasia show task-specific differences in linguistic microstructure? So we were really looking at some of these variables you see here, such as mean length of utterance, words per minute, verbs per utterance, type-token ratio — those types of things: kind of singular measures of discourse that one might be interested in, that tell you something about the quality of the structure of the language, very much at that
structure level, not so much the functional level. But the cool thing we were able to do with AphasiaBank is that we had an enormous — for aphasia — sample size: we ended up having 90 individuals with aphasia. We got 80 age-, education-, and sex-matched controls, also from AphasiaBank, and we were able to really directly compare how individuals differed on these language outcomes across tasks — across narrative tasks versus expository tasks versus procedural tasks. What we wanted to do after that was expand a little bit and do a bit more sophisticated analysis, which is what Julia and I have been working on for the past year or so. This is, again, taking all of the data available in AphasiaBank — which is reaching close to 300 speakers with aphasia at the moment, and over 200 controls of various ages. We wanted to again look at linguistic structure by task, but rather than use singular metrics like mean length of utterance — which everyone kind of disagrees on what it truly represents — why not just compare all of this data in a multivariate way? So we took all the parts-of-speech data that you get from the CLAN output, tense usage, all these interesting structure variables, and we wanted to model them in a multidimensional space and then reduce that space down so we could interpret it. And we wanted to look at the difference between tasks, and again at that linguistic structure. On the left you can see just a comparison of controls, where yellow is a procedural task in which people describe how to make a sandwich; Cinderella and "important event" there in the middle are both narrative tasks — one is autobiographical ("tell me about an important event in your life") and one is a story retelling, the Cinderella story; and then you have expository tasks — one is Cat Rescue, which is a picture of a cat being rescued, and the other is Broken Window, which is a sequence of pictures that people are asked to describe. We wanted to see if there was a difference by task, and you see it in the four linguistic variables of interest — you see it for controls, and you also see it for speakers with aphasia. It's probably easier to look at this bottom graph here, which is narrowing down by aphasia type, if you're interested in that, and you see that there's some task similarity in terms of linguistic structure that breaks down. We actually go into more detail, and we look at severity as well — people with aphasia of various severities — so that's something we're also interested in looking at. So that is how I have used AphasiaBank. Many, many publications are out there using AphasiaBank — many of them are hosted on the AphasiaBank website — and you can get a great idea of all the interesting ways to use it. But I didn't want to just give you things about AphasiaBank; I wanted to leave you with some other resources as well. So you'll want to screenshot this — this is a great one; I'm sure this will be recorded for later. OSCAAR is a fantastic resource — it's hosted at Northwestern, I think — with lots of speaker data available. And there are some resources for data collection, like IPhOD, which is English words and pseudowords. They've also got things like CLICS, which I believe is cross-linguistic, and others at the bottom. But then there have also been some really fantastic things available in conversation. My personal favorite is the Carolina Conversations Collection.
My personal favorite is the Carolina Conversations Collection. They have two cohorts of adults here: one cohort talks about their chronic disease; the other cohort talks about their cognitive impairment. They rotate who the participants are speaking with, so a student versus a known peer, and they also have longitudinal data on some of those speakers, which is really unique. That's not typical of DementiaBank, which is another part of TalkBank: if you go to the Pitt database, it's one of those by-site organizations within DementiaBank. They have a lot of longitudinal data on a picture description, a sentence-generation task, and I think word fluency as well, for individuals who have various types of neurocognitive disorders, including mild cognitive impairment and Alzheimer's disease, and I think they have a few primary progressive aphasia cases as well. Johns Hopkins also has a database, thanks to Argye Hillis, which has some primary progressive aphasia, also included on DementiaBank, if you're interested. There's a really great blog I just left here on the bottom; it's a WordPress site, and hopefully it lives there forever, but if it doesn't, go copy it to a Word document somewhere. It has fantastic resources for open-source language, either data like AphasiaBank or things to use in your own data collection, where you'll create your own materials. And just a little plug before Will makes me stop and answer questions: if you're interested in anything related to spoken
language and aphasia specifically, please join this working group; we're trying to improve the evidence and work together to make these databases even bigger. So that's my little plug, and I guess I'll take questions, unless... Brian, did you join back?
Sure. Yeah, I'm here. Let me turn on my video. All right. Great. So I don't have to say all that.
Obviously, you're... you're the man, the leader.
So, you know, once you get to be 74, we're very happy to have these young people who are going to take over pretty soon, I hope, for all these types of data. I think eventually we're going to have so many databases like this that we're going to have lots of different input, not just aphasia; a lot of it is child language. I just want to add a few things. First off, for the non-clinical data, there's no password required, so CHILDES, BilingBank, and so on are totally open. Also, your work obviously depended a lot on the fact that the data were tagged morphologically. You didn't show that in the particular ones you showed, but for all of AphasiaBank, all of the English CHILDES, the Spanish, German, French, we've automatically tagged by part of speech and then by grammatical dependency. I think we have 12 languages we do that way. So that's an important thing.

In terms of DementiaBank, I think it's very interesting to note that there is now a challenge at Interspeech for the best computer program that will be able to differentiate mild cognitive impairment from normal and from real dementia. And we know something like 150 computer science labs around the world are basically using these data to make the most wonderful algorithms they possibly can. So that's a very speech-technology type of world. There are also speech-technology things for aphasia, particularly for apraxia, that people are getting into. And then within the child language area, there's a project called PhonBank, which really looks at young children's phonological productions, and it has a program called Phon that has integrated within it all of Praat. So you can run Praat inside Phon and do all this fantastic, you know, pitch extraction, jitter, vocal fry, or whatever it is, or IPA, or whatever you want to do in phonology. So I think those are important things.

Um, let's see. One thing is, I don't know if I can share my screen here. I can't share my screen, okay, but on the main site there's a link to a tutorial page that I think would be very helpful to people. The tutorials are all screencasts for all these different, you know, facilities inside of it.
Yeah. And, you know, not to push too hard, but there are two other things. One is that we really do want to move more and more to linking to the brain, just like you're doing. Right now, I think the best approach would be to go to OpenNeuro. We're working with a project over at Pitt with Julie Fiez that will be taking AphasiaBank data, and we'll also get scans; I'm mostly talking structural here. It might be somewhat functional, but mostly structural, including white matter. And I think that is very important. So, for anyone who has ideas on how we should best do that: my idea would be that we could keep all the language data on TalkBank, but then link to OpenNeuro for a specific data set. Hopefully that would work, but we haven't really done that yet.

And then the other is the TalkBank database, TalkBankDB; there's a link from the TalkBank page to TalkBankDB. Right now it only goes to CHILDES data, but by the end of the week (we already have pulled in all the other databases) AphasiaBank and DementiaBank will all be searchable in that system. You basically specify what data you want, whether you want types or tokens; you can use CQL, which is corpus query language, to look for sequences of grammatical structures. And then you can get all the matches to those queries in a downloadable CSV, which you could then pull into R or into Excel or whatever. We also have a package inside R where you can pull directly using an R query, so that, I think, is going to be a really important thing. We haven't got everything out there on the web yet, so I would say try it next week.
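As a minimal illustration of the "downloadable CSV" path Brian describes, here is a hedged pandas sketch; the file name and column names are assumptions, so inspect the real export's header before adapting it.

```python
# Hypothetical sketch: load a CSV exported from TalkBankDB and count
# token matches per corpus and speaker. The column names ("corpus",
# "speaker", "word") are assumptions, not the guaranteed schema.
import pandas as pd

df = pd.read_csv("talkbankdb_export.csv")   # hypothetical file name
print(df.groupby(["corpus", "speaker"])["word"].count().head())
```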
Okay, so that's enough to add. I think there are a few questions you can answer better than me. I'm just reading off of your questions.
Oh, I was just gonna read off of your questions. Are there any banks for signed languages or...?
No. Sign language is a really tough one. The problem is not the nature of the sign; the problem is that the privacy concerns are just extreme. It's been really difficult. There are some banks, but no... yeah, it's really a problem.
Can students use it if they do not have a faculty sponsor?
First of all, the open databases are open. And, you know, it's really easy to get access; I know it seems like a barrier, but it's really easy. If someone doesn't have a faculty sponsor, I mean, I guess I'll sponsor them, you know. Mostly, the reason we want the faculty sponsors is that we think students have much more energy than their advisors, but we want to get those advisors somewhat involved too, you know, to make them aware of it, so that there's sort of a history to the whole thing. Yeah.
Yeah. There was a question that kind of disappeared, but it was just: how consistent are codes across lab sites? If you want to speak to that; I can speak to that for AphasiaBank.
Mm, well, I mean, the coding system: everything is in CHAT, so that's nothing to worry about. There are no other databases in spoken language like that, where everything is in the same transcript format. Yeah. So we try and keep it consistent across a lot of sites. You know, reliability is done at the lab, for the most part: reliability coding and whatnot. Yeah. I mean, obviously, when you talk about coding, there's a difference between transcription and annotation and coding. I think the transcription is really, really tightly specified by CHAT, but then people want to code additional things; that's project-specific, and there are methods inside the CLAN programs for tracking that. But we're not, you know, we don't have a discipline-wide standard there. Everybody needs their own codes. Yeah.
I mean, we kind of make up a few of our own because of what we're interested in.
Right, but those are add-ons, as long as you have a basic transcript.
Yeah, right. Exactly. Yeah.
But I would say no. And, to be clear, we really prefer to have transcripts linked to media. All the older data don't have that, but we've really been pushing to make it so that the media, either audio or video, and ideally video, is linked.
So, yeah, I didn't show it, just for PHI reasons, but yeah: when you go to the browser for the database, everything is linked; the transcript is actually linked to the video, which I think is fantastic. You can, you know, click anywhere on the transcript and it goes automatically to that point in the video and you hear that person, which I think is a really great thing that TalkBank does.
That's also true for TalkBankDB. So once you get this output, it still has the link from every utterance that matched back to the transcript, so you can just go right there. Yeah.
Any other questions?
Sorry, my dog is barking; just ignore it.
Bree got most of those questions in the Google Doc, but there was one: are there any examples of children with TBI or aphasia?
Children... children don't get aphasia; there's no such thing. Children with TBI: do we have a lot of TBI? No, we don't, thank God; there's very little money there. But we do have some young people with TBI, down to, I believe, 12. But really, TBI, typically from motorcycle accidents, starts around 30. Yeah.
Okay, so thanks, guys. I am sure we have plenty more to discuss, but we'll end with that. Yeah. And I really appreciate you both doing this. I'm going to try what we did in the last session, so I will very soon unmute everybody, and then hopefully we can clap to thank these people who jumped in at the last minute to present this awesome stuff. So okay, get ready, guys: 3, 2, 1...
Okay, looks like my dog was not as upset about this as the last time, so that's good; he's learning. Now I'd like to welcome Josh Faskowitz. He's a PhD student in neuroscience and psychology at Indiana University, and he does work in the computational cognitive neuroscience lab. He's got a lot of experience in using completely publicly accessible online neuroimaging datasets. And again, as we mentioned, online data collection is good, but just getting data that's already been collected is better, modulo all sorts of other concerns. But anyway, I'd just like to welcome Josh, and if you can take it away, that would be great.
Cool, yeah. Hey, I'm Josh, Josh Faskowitz. I'm a student here in Indiana, and I'd just like to thank you guys for the opportunity to share my knowledge, to share all this data that I've looked at over my years in grad school. So let me set up the screen-share thing here... share my screen... share. Okay, oh, this... and then let me try to
Can I hit present?
Okay, cool. So, um, yeah, I've made kind of a whirlwind tour of some freely available data across the web. Just some background about me: I do brain networks. I look at structural brain networks, functional brain networks; we merge structural and functional brain networks. And over the course of my studies, since there's so much free data online, I fortunately haven't had to collect a lot of data; other people's data constitutes a large chunk of what I've done. These are examples of some of the data that I've used, and I'll describe some of this stuff to you. So I guess I would consider myself a research parasite. This is a phrase that has come up recently for people who use other people's data and make new hypotheses about it, and we totally depend on other people collecting the data. It came up as a negative thing to begin with, I think in a New England Journal of Medicine commentary, but we've since owned it, our community of research parasites. So yeah, I'm very appreciative of everyone collecting data, MRI data specifically. I'm not affiliated with any of these resources that I'm about to show you; I'm just going to give you the user-level experience and some practical experiences with this data.

One of the places that has a ton of data is figshare; you can google it pretty easily, figshare.com. Sometimes if I'm bored, I just search it with neuroimaging key terms. And also, when you're publishing your own study, an MRI study, and they ask you to share data, this is a great resource to put data; they give you a pretty large allowance. I don't know exactly the number of gigabytes they give you, but they allow you to put a lot of data here.

There's been some stuff written about open data; Russ Poldrack is one of the big proponents in MRI of sharing data. We're going to go through this pyramid of his a little bit, ordered by what he calls potential for reuse. The top of the pyramid is results, someone's analysis. Neurosynth is a super cool website; I'm hoping I can actually preview some of this stuff, so if I click here... yeah, I'm still screen sharing. Neurosynth is a great website if you just want, let's see, a key-term meta-analysis of activation results across neuroimaging studies: Neurosynth has collated coordinates in a machine-learning manner, and they have these maps of activation for you. And you can actually, if you wish... I just clicked the link for "language"; we can use any search term here, and you can actually download this map for your own purposes. So say you're interested in the default mode network, for example: this is a meta-analytic map compiled for you, and you can download it here. So it's a resource, for free. BrainMap is a similar one; it has coordinates, so you can extract the coordinates from that one. NeuroVault is a place where you can download and upload unthresholded statistical maps from your studies. It's kind of like Neurosynth, but this is not just the results of the study; sorry, not just the significant clusters, let's say, but the whole unthresholded map.
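As one concrete way to script that NeuroVault step, nilearn ships a fetcher for NeuroVault images; a minimal sketch follows. The image ID is a placeholder, not a specific map from the talk.

```python
# Sketch: pull an unthresholded statistical map from NeuroVault via
# nilearn's fetcher and display it. The image ID below is a
# placeholder; look up a real ID on neurovault.org first.
from nilearn.datasets import fetch_neurovault_ids
from nilearn import plotting

data = fetch_neurovault_ids(image_ids=[10426])  # placeholder ID
img_path = data.images[0]                       # local path to the map
plotting.plot_stat_map(img_path, title="NeuroVault image")
plotting.show()
```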
Okay, so if you're a neuroimaging researcher and you just want data, the Human Connectome Project is like a one-stop shop to get a bunch of data. The main study was over 1000 subjects on a 3T scanner at Wash U in St. Louis, and they've since collected around 180 more subjects up in Minnesota with a 7T scanner. And they really went extensive in their imaging protocol. This part of the huge project, the 1000 subjects, has wrapped up; the ages are pretty young, I think around 25 to 35, and there's a twin structure here. It's a really big data set, on the level of terabytes for the raw data. To get access, you have to sign up, so it's not point-and-click and then you get the data automatically; you have to do a little waiver, but it's pretty easy to do. I signed up a while ago and it took a week or two, and I had to be underneath a PI, of course. But once you have access, you have a pretty high-resolution T1-weighted image and a T2-weighted image at 0.7-millimeter isotropic voxels. The resting state is sampled at a TR of about 0.7 seconds, and that's for an hour: they have four 15-minute sessions. So that's pretty cool if you're into functional network reconstruction, like I am. They also put people through a number of tasks. The tasks aren't so deep, in my opinion; they were kind of just doing a wide survey of tasks: working memory, gambling, motor, language. But still, the resting-state functional scans are pretty highly sampled, so it's pretty good quality data. And then diffusion imaging: you can run your favorite tractography with these data; it's a pretty good diffusion acquisition.

One of the coolest parts about this project is that they preprocessed the data for you with their minimal preprocessing pipelines. They did a lot of technical advancement, and you can just download the results of their preprocessed data, so you don't have to worry about it: they've already computed the movement parameters for you, let's say, and they have already normalized to MNI space. They actually even created their own format called CIFTI, which has both the surface data, projected to a cortical surface in a standard space, and the subcortical structures. And let's say you're not even interested in all the subject-level data: they actually have preprocessed group-average Cohen's d activation maps for their tasks. So if you just want to look at a map of activation and kind of do your work from home during this quarantine stage, you can download that too. That's pretty useful; I did, in this paper, for example. And the cool thing, since HCP is so pervasive (I mean, a lot of us have heard of it), is that a lot of people have released their own versions of the data. Here are some examples: I've included links to where people have processed 1000 structural brain networks, for example, that you can download; you can download the 7T tractography data and time series, at the bottom here. So there are a lot of options now. For HCP itself, you have to sign a waiver.
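For readers who download the preprocessed HCP data, here is a minimal sketch of loading a CIFTI-2 dense time series with nibabel; the file name follows the HCP naming convention but is illustrative only.

```python
# Sketch: load an HCP-style CIFTI-2 dense time series with nibabel and
# pull out the data matrix (timepoints x grayordinates). The filename
# mimics HCP conventions but is a placeholder.
import nibabel as nib

img = nib.load("rfMRI_REST1_LR_Atlas_hp2000_clean.dtseries.nii")
data = img.get_fdata()           # shape: (timepoints, grayordinates)
print(data.shape)
```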
INDI, the International Neuroimaging Data-sharing Initiative, is just point-and-click for most of the data: you just go to this link right here and you can get access to a lot of different studies. I'd say that this is, in my opinion, one of the older data-sharing initiatives in MRI. So there are a lot of different studies; I would estimate over 30 or 50.
And for most of it, you just go and click the data and you start downloading to your computer, so it'll take a while. Some of the datasets you need to register for with NITRC, the neuroimaging informatics resource network, which is pretty easy: you just sign up there, so that they have your email. Here's a view of some of the samples they have. I mean, I could just click, but you can see that they tell you: there are 200 scans here, 28 scans here, 25 scans here. There's a lot, so I don't think anyone is going to struggle for general data. Again, it's not so specific, but if you just want to play around with some data processing, INDI is a great resource to look at.

I have some highlights from INDI that I like. NKI Enhanced is a particularly good data set, in my opinion; it's a lifespan data set, so it has data from roughly 6 to 85 years old, and they also have deep phenotypic information. For the deep information, you will have to apply for a waiver to get all the tasks and the physical measurements, but once you get that, it gets pretty deep. (I didn't mention it, but HCP also has pretty deep phenotypic information.) ABIDE is a large autism data set: 1000 combined autism and typical controls, mostly resting-state data. There's ADHD-200. SLIM is a longitudinal data set from China. MPI is another data set, the Mind-Brain-Body data set from Leipzig; this also has pretty deep phenotypic information, and there's a Scientific Data article on it. ATLAS is an openly available data set with lesion segmentations from USC, the University of Southern California. That's cool to check out: the lesion segmentation is already done if you want to test your new lesion-segmentation software.

How about some clinical data? Well, these ones you definitely have to apply for: ADNI and PPMI, the Alzheimer's and Parkinson's data sets. They're huge, over 1000 and over 500 subjects each; you do have to go through a bit of a lengthy application for these, but they're out there. PREVENT-AD recently put out a preprint, and I don't really know much about it, but it's available. And ABIDE II: they have even more autism data.

Movie data: Cam-CAN is over 600 people, collected at Cambridge, across the lifespan, and they have resting and movie fMRI and a good amount of behavioral data. This one you have to apply for as well, but it was pretty easy; they just took a while to get back on signing the form. It didn't take much, but it took them a while to respond. StudyForrest is around 20 subjects, but they watched Forrest Gump in the scanner, so that's pretty cool if you're into that. This data set in particular did a great job of annotating the Forrest Gump movie; I think a group from Germany presented it, so they have all these annotations, in German, of what was shown at each time. Then Jim Haxby's group at Dartmouth has openly available fMRI data from people watching Raiders of the Lost Ark. The Healthy Brain Network Serial Scanning Initiative also has 10 people that watched Raiders of the Lost Ark, plus 10 sessions of resting state, and also watching movie trailers. The Healthy Brain Network itself (the serial scanning one was like a test phase for this larger study) has a lot of children in it, and they watched a movie; I think they watched Cloudy with a Chance of Meatballs. I don't know much about the EEG, but they have that available here too; I'm not sure if that one is point-and-click to download or if it's a waiver.
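Some of these samples can even be pulled programmatically; for example, nilearn wraps the Preprocessed Connectomes Project release of ABIDE. A minimal sketch, assuming you are happy with their "cpac" pipeline, follows; it downloads data, so expect a wait.

```python
# Sketch: grab a few preprocessed ABIDE subjects through nilearn's
# wrapper around the Preprocessed Connectomes Project release.
from nilearn.datasets import fetch_abide_pcp

abide = fetch_abide_pcp(n_subjects=5, pipeline="cpac",
                        derivatives=["func_preproc"])
print(abide.func_preproc[:2])   # local paths to preprocessed 4D images
```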
And then, deep data on single subjects: MyConnectome is Russ Poldrack's dataset, where he scanned himself 60 times within a year, and he actually has serum data from his blood. He has some tasks and resting state, so it's one person, but really densely sampled if you want that, and they actually have data preprocessed through fMRIPrep available for this one. SIMON is a guy who got scanned at many different locations around Canada. I think
this next one is Todd Constable's dataset, where someone was scanned for 30 sessions within 10 months; he watched a couple of movies and did resting state, and that's a direct download. The Midnight Scan Club is a dataset that has been heavily used recently: 10 people, 10 sessions, and it's pretty high-quality data. The Gratton lab actually has, at this link in the slide, data available to download for this data set, like time series that are already extracted, so that's helpful too if you don't even want to do the preprocessing. And then we even have non-human data: monkey data available from the group in New York. And the Allen brain atlas: that's not MRI (well, there's some of that), but the Allen brain atlas is a great resource for mouse brain activity.

OpenNeuro: so, this is all pretty overwhelming, and OpenNeuro might be a little bit more overwhelming. I just want to click here really quickly. About OpenNeuro (I mean overwhelming in a good way): OpenNeuro is a place where people deposit their datasets, and they have a philosophy of storing the data in the BIDS format, meaning that it's already well organized for you. You can click a dataset here; I just clicked this random data set, and it takes a while to load, but you'll see that these data are available and can be browsed. Here's the study; usually there's a readme and some acknowledgments (if you reuse the data, they ask you to cite it here), and then on the right side here I'm just clicking through each subject, and you can see the data they have: anatomical data, functional data. And you can download these in bulk, too. So it's really helpful; you can browse this website and see what people have put up. I just have some pictures from it here.

Finally, this is an initiative from the Pestilli lab here at IU. I can't get too in-depth into it, because it does a lot, but for this platform I just recommend going to the link and looking at what they have to say. Basically, they have the compute set up for you: you just click the kind of processing you want, and they run FreeSurfer, they run fMRIPrep, at this website. So it's pretty cool; I'd recommend checking out those resources. And then, finally, since I'm a network guy: if you go to my GitHub, I have compiled a whole page of openly available network data for you to play around with. So this is just my GitHub; I have non-human animals, I have human animals, network data, so that's available. And if you want some help, go to Neurostars; I really like that resource. So that's it for me. You can hit me up on Twitter, or... I'm going to share this PowerPoint with the links in it. In this time when we're all at home, there's definitely a lot of data to play around with. We're not alone here; we have a good community of very generous people, and I thank them, as a data parasite myself.
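Because OpenNeuro datasets follow BIDS, they can be indexed with standard tooling once downloaded; here is a minimal pybids sketch. The dataset directory name is a placeholder for whichever accession you fetched.

```python
# Sketch: index a local BIDS-formatted dataset (the layout OpenNeuro
# uses) with pybids. "ds000102" is a placeholder directory name; any
# valid BIDS dataset directory works.
from bids import BIDSLayout

layout = BIDSLayout("ds000102")
print(layout.get_subjects())                              # subject IDs
print(layout.get(suffix="bold", extension=".nii.gz")[:3]) # a few runs
```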
Thanks, Josh. That's fantastic. And I just want to particularly thank Josh for doing this, because I think, literally, he was asked maybe last night at like five or six p.m. Central to put this together. So thank you for doing that at the last minute; that fit perfectly. As Josh said, he'll send the PowerPoint, so we can upload that and you can access all those links and resources. So I'll just ask a couple questions, if you don't mind, from the Google Doc. The first one is asking for the resources, so yeah, we'll make sure that those get linked. And then the second question is more general: this person said, "I'm not familiar with using big data or data sharing from other labs. From your experience, how is publishing with it? Do reviewers criticize it? Is it a pretty welcoming literature?"
Sorry... publishing the...?
Yeah. So, you know, if you were to compare publishing a controlled experiment versus data that you got from somebody else, how does that sit with reviewers and journals?
Yeah, yeah. That's a good... that's a great question.
In terms of... yeah. Me personally, since I'm not the one collecting the data, I actually do feel compelled to share the derivatives of my study. For example, I published on that NKI dataset, and I actually uploaded the networks that I generated, so that I'm doing my part too in sharing the data. There is some trickiness, though: in any of these situations, you should consult the documentation from the original data source and make sure you can share derivatives. For NKI, it's one of those datasets where you can kind of point and click, so it's a pretty open data-sharing
agreement that they have, let's say. But it would be a case-by-case basis with other data sets; for example, with ADNI, the clinical Alzheimer's data, you would have to check their documentation to make sure you can share the derivatives. Other than that, yeah: you didn't collect the data, but you're still using it. Obviously, you shouldn't share the raw data on someone else's behalf, but I would say it's a good gesture to share what you've done with the data that's unique on your end.
I think you're muted.
Sorry about that. Yeah, so I think that another part of that question is really just, you know: is it harder to publish using borrowed or shared data than your own data?
Mm-hmm. This is just my perspective as a student, but I think it can be harder to generate hypotheses, in the sense that you have the restriction of using data already collected, so you might not be able to tailor the data to answer the exact question that you want. In that regard, maybe it could be harder. However, from my perspective as a grad student, I don't think it should be looked down upon or anything by a journal editor, say, them being reluctant to publish it just because it's not your own data. I haven't heard any stories about that, though it's hard to know; I don't know any editors or anything.
Right, right. I guess we should all make sure to browbeat people, you know, to make sure that they're okay with doing this sort of thing, because especially now, right, it's not like we have other options anyway. Data are also really expensive and can be difficult to collect, no matter where you are. Another question that just came in from the chat was: I'd like to know to what extent the data are raw or preprocessed. Right, so I guess the question is really about, you know, if data have already been processed, how hard is that to work with?
Yeah, yeah. Well, most of the data I shared here are in NIfTI format; there's no DICOM, so it's a little bit above raw. For HCP, for example, that's preprocessed, and they have raw, but I wouldn't actually recommend downloading the raw because it's terabytes of data. OpenNeuro, that huge website with all the studies (I'd say they have over 100 studies on there), is NIfTI-level data, and that's not preprocessed. But whenever you're working in this situation, you should do the legwork to make sure the data is what you think it is. I say this because, for example, on some individual OpenNeuro datasets, they've skull-stripped the T1, and you wouldn't want to apply SPM normalization to a certain atlas expecting skull-on when it was actually skull-off. Those are the kinds of things you have to check against the documentation. All these resources should be well documented, and maybe, from my perspective, if a resource is not well documented, it's probably not worth using, because then you don't know whether they've, say, nuisance-regressed the data already; you don't want to redo that, for example. Stuff like that is on a case-by-case basis, obviously, but generally, for the resources I have here, they're well documented; they'll tell you what's raw and what's preprocessed. But most of it is NIfTI format, unprocessed.
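The "do the legwork" advice can be partly scripted; here is a small nibabel sketch with a deliberately crude skull-stripping heuristic. The filename is a hypothetical BIDS-style name, and a zero border is only suggestive, not proof, of skull-stripping.

```python
# Sketch: quick sanity checks on a NIfTI file before assuming anything
# about its processing state: matrix shape, voxel size, and a crude
# "is the border empty?" heuristic for skull-stripping.
import nibabel as nib
import numpy as np

img = nib.load("sub-01_T1w.nii.gz")        # hypothetical BIDS filename
print("shape:", img.shape)
print("voxel size (mm):", img.header.get_zooms())

data = img.get_fdata()
edge_mean = np.concatenate([data[0].ravel(), data[-1].ravel()]).mean()
print("border is empty; possibly skull-stripped" if edge_mean == 0
      else "border is nonzero; skull may still be on")
```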
Cool. So another quick question: is there a convention for the citation of data? Do you just cite the paper, or is there some other thing you have to do?
Yeah, yeah, that's a fantastic question. OpenNeuro, again (I keep coming back to this), asks that you cite OpenNeuro, and I think they force everyone: if you're going to post your data on OpenNeuro, you have to accept their Creative Commons license.
You did last time, right? And that's what they said: they make sure that it's going to be Creative Commons.
Yeah, yeah. And ADNI, for example: when you sign up for ADNI, the Alzheimer's Disease Neuroimaging Initiative, you actually, in some cases, have to add them as a last author. So that's a totally different level of acknowledgment.
Um, that's a pretty big jump there.
Yeah, yeah, that's a pretty darn big jump. But I would say, if there's no explicit agreement, if it's really open (for example, INDI), I think it's still a good gesture to put it in the acknowledgments when you're publishing, just so that your readers are aware, and just so the data providers at least get some sort of metric: when people search the dataset on Google or PubMed or something like that, it kind of shows up. There is no standard, though. I think there are people pushing for standardization of assigning credit for data; I don't think there's consensus. But I would say the best practice is, when in doubt, put it in the acknowledgments at minimum.
Yeah, that seems like a great option. So another question here: this may be a little off topic, but has using shared data opened up collaborations with other researchers or sites? Or do you typically work independently: once the data is available, you just download it? Or do you work more actively with those researchers?
Looks like either me or Josh is frozen right now, and I'm not sure which it is.
So I'm just asking if people can hear me... Okay, so it looks like Josh is frozen.
So is Josh frozen, please indicate.
Okay, all right. Well, um, you know, Josh may or may not be able to hear us, but this is being recorded, so he'll be able to go back to that Google Doc and answer your questions on it, hopefully. But at any rate, whether Josh can hear us now or whether he will come back and view this in his own time, I'd like to do the applause again. I'll do the same thing: I'll count down, and then please just give your applause to Josh, who came together with us at the last minute to help us out. So: 3, 2, 1...
Awesome. Okay. So we're at two o'clock, but I would like to just stick around, if people are okay with that, for just a few more minutes. Basically, the idea was: you know, we've talked about a lot of interesting things, publicly available data sets, methods for collecting data online. But again, maybe the easiest way to go about this is: someone published a paper, you read that paper, and you realize, "I could probably use that data to do X." And you can just ask them for the data; I think it's often the case that people are willing to share it. So basically, I just wanted to bring Bree back. And if there's anyone else that has a lot of experience with this, please just go ahead and indicate in the chat if you are interested in talking about this, but I thought that Bree would be good because she has a lot of experience in asking people for data. So again, like I said, if you are interested in chiming in here, please just go ahead and
indicate that in the chat. I'm going to... actually, it looks like Josh has, like, thawed, so I might try to bring him back.
Just in case Yes. Hey,
I am sorry about that
No worries. But maybe you want to chime in on this topic too; I'm not sure if you've dealt with directly asking people for data. But at any rate, I think this might be a good place to get a lot of questions. I don't know... from my experience personally in graduate school, I mean, I started off and I probably could have done this, but I think I was very shy and did not want to just ask somebody; it felt too brazen to do that, so I didn't. But I think, in retrospect, it wouldn't have been a big deal. So that was my experience. I don't know if other people have that sort of experience, but if you have questions about that, or concerns, please just go to the Google Doc and type your questions, or comments if you want to add comments. And again, like I said, if you have some experience, please chime in here. But, I don't know... maybe, Bree, could you comment on this a little bit?
Yeah, I think this is really tricky when you work with clinical data like I do. For a lot of it,
it's still kind of gray territory whether or not lesioned brains constitute protected health information, so it depends on where you are. I know in the UK (I did my PhD at Cambridge, so I was based in the UK) their ethics basically state that you can't share any brain that's not, quote, typical, because it does contain PHI. Here in the US, IRBs vary depending on where you are; they're more stringent at a hospital or medical university versus a non-medical university. But something that has typically worked for me is just asking for standardized lesion masks: masks that are already standardized to something like the MNI space, on which you can then do some analysis, and most people, or many people, are free to do that. But as someone who collects clinical data: it is a lot of money and time to get this information.
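To show what can be done with just the standardized lesion masks Bree mentions, here is a minimal sketch that stacks binary masks, already registered to a common template such as MNI space, into an overlap map. The paths are placeholders, and all masks are assumed to share one voxel grid.

```python
# Sketch: combine binary lesion masks (already registered to a common
# template, e.g., MNI space) into a voxelwise overlap count map.
# Filenames are placeholders; all masks must share the same grid.
import glob
import nibabel as nib
import numpy as np

paths = sorted(glob.glob("lesion_masks_mni/sub-*_mask.nii.gz"))
first = nib.load(paths[0])
overlap = np.zeros(first.shape)
for p in paths:
    overlap += nib.load(p).get_fdata() > 0   # count lesioned voxels

out = nib.Nifti1Image(overlap.astype("int16"), first.affine)
nib.save(out, "lesion_overlap.nii.gz")
```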
And it is difficult to share. But I think it's always worth an ask. My caveat is to let people know why you want it and exactly what analysis you want to do, and, in the first email, to say, "let's work out authorship criteria," because I think that puts forward: I respect that you collected this data, and I want to give you whatever you think you need in terms of credit. That typically has worked out the best for me, I would say.

Right, and that's a great point that you make. I think, you know, if you are offering authorship, pretty much everyone is going to want another paper on their CV, and citations of their pre-existing paper. I can imagine situations otherwise; I have known some people that are more protective of data. But I think once they've already published a paper on it, for example, that opens up a lot more flexibility.
Yeah, I'm trying to fill out some more thoughts on this. Does anyone else want to chime in here? Bree, you can go ahead if you want, or someone else may have a question, or, like I said, if anyone has experience with this, please feel free to add something. So... yeah, someone asked a question on the Google Doc with regard to authorship: would the person who collected the data go last, or just somewhere in the middle?
For me, it kind of depends on the preference. I think in neuroscience, typically, the most senior author is either first or last, so a lot of times people sharing data are fine with being in the middle, because they collected the data, and according to, like, the APA agreement, data collection alone doesn't necessarily constitute senior authorship. It kind of depends on what authorship criteria you're honoring, but I try to be very forthright with that and say, you know, this is what we typically follow in terms of APA:
do you want to talk about it? But typically I see them, at least in neuro, falling in the middle. Josh, I don't know if you have had a different experience; I know you said ADNI likes to be senior author,
but I don't know if you've had any other different experience with that.
Yeah, so here at IU I've actually helped preprocess data for other labs. And as a data preprocessor, where I'd actually take the raw data and preprocess it, I've been a middle author for that contribution.
But never have I been the data collector. The only thing that I can think of is that, at USC in California, they have Paul Thompson's imaging genetics center, and they have the ENIGMA project; they collect thousands of subjects across sites, so they have a 200-person author list. Everyone who collected data and did the basic preprocessing to aggregate the data is in that middle block of 200, which is alphabetized. Then there's a block of senior authors at the end, maybe like three senior authors, which is not alphabetized, and the first block, the people who wrote the first draft, is there in a first-author block. So that's my only experience with that, and those are just a bunch of data providers... though I hesitate to say "just"; I mean, they're providing an important part of the ENIGMA project in those instances.
Right, right. And I think an interesting point to bring up here is that if your research is federally funded, there is, in some sense, an obligation to disseminate your research, and your data, in some sense.
Right. So I think this is something for us to think about. It's something I've never really done; I've never publicly posted data. At the same time, you know, is there some obligation that I have, because my research is NIH-funded, to make sure that the data is available, so that the maximum gain can be had from what was collected? So I think it's an interesting point to make, you know. Does anyone else have any questions, comments, concerns? I'll just give it a moment... Yeah, I guess not.
So, if that's it, then I just really want to thank everyone who participated and is still here. Bree, thank you for just responding to my email yesterday. And Josh, like I said, for coming online; and, you know, from earlier, that was really an amazing presentation, and everyone else that partook as well. We're going to make sure that these recordings are posted. I'm not sure exactly when, and how that's going to be done, but I'll make sure to disseminate that information. It may be that I post all these recordings on the CUNY conference website, maybe along with the recordings from the conference itself; that might be a good solution. But however you found out about today's session, that's how you'll find out about the posted videos. Please feel free to share those. I think that's pretty much it.
I don't know how to end these things. But anyway, I'm just gonna say goodbye, good night, good luck, live long and prosper. And, you know, like I said, feel free: you can ask us for data. I know that I'm willing to share anything that I've published; if I can still find the data, I'm perfectly happy to share that with you as well.
So, all right, thanks, guys. Goodbye. And thanks to the CUNY organizers for making this happen, really, because if they hadn't followed through and actually held CUNY, then this workshop would not have happened either. Okay. Bye; I guess I'll stop recording.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988753.97/warc/CC-MAIN-20210506114045-20210506144045-00103.warc.gz
|
CC-MAIN-2021-21
| 176,735 | 435 |
https://searchenterprisedesktop.techtarget.com/blog/Windows-Enterprise-Desktop/Say-You-CAN-use-Gadgets-in-Windows-8-8GadgetPack-to-the-Rescue
|
code
|
Thanks to old friend, occasional co-author, and MS Security MVP Deb Shinder, I’m now aware of a snazzy little utility named 8GadgetPack that restores those ever-so-handy-and-informative desktop gadgets to Windows 8. For those who don’t recall, gadgets were stripped out of Windows just over a year ago because of security concerns, more or less in synch with the release of Windows 8 (here’s an undated MSDN article entitled “Desktop gadgets removed” that provides MS’s official rationale for that decision). Given that Ms. Shinder is a ten-year recipient of the MVP with a focus on Enterprise security — see her bio for more details — I feel even more comfortable adding back Gadgets to Windows 8 than I did before, in stubbornly refusing to give them up on Windows 7 (all of my surviving Windows 7 machines still run them).
The screen capture you see to the left of this text material shows what the default install of 8GadgetPack looks like on my production Windows 8.1 PC. It appears in the old-fashioned (but very handy) fenced-in sidebar area reserved on the right-hand edge of the screen that was introduced with Windows Vista, and removed in Windows 7. Those who elect to put their gadgets elsewhere, or do away with the fenced-in area completely, need only right-click inside the sidebar and manipulate the program’s Options settings to arrange things more to their liking. I like these defaults (at least for now: it’s still only my second day with the program installed on my Windows 8.1 desktop) so I’m going to leave them alone for a while.
I’d more or less resigned myself to living without gadgets on Windows 8, resorting instead to a handful of other favorite tools to glean similar information from the OS to what’s shown to the left of this text. But with the ability to regain access to both Network Meter and CPU Usage (both from AddGadget.com, and my two very favorite Windows gadgets because they show me what my PC is doing locally and on the network at all times with only a quick glance) I’m happy to put those items back on my Windows 8 and 8.1 desktops. The information they provide is simply too useful and informative to live without, when I don’t have to. And with dual layers of firewalls around my local network, and reasonably strong endpoint security software on all of those machines in addition, I’m willing to shoulder the security risks of compromise through those gadgets, given that my understanding is that the risk is pretty minimal under these conditions.
I still need to find a reliable source for one more old favorite gadget, simply known in its own information block as the “Shutdown Gadget.” It provides a simple control bar with three icons: shutdown, restart, and logout current user. Like the other gadgets I use, it offers great convenience and easy access to functions I like to keep immediately at my fingertips (that goes double on those Windows 8 systems I own with touchscreens, where a fingertip is all that’s needed to activate those controls). By tracing it back to the name of the gadget file itself on one of my Windows 7 machines, I learned that it is named shutdown_v2.gadget, and remains available for download from Microsoft. The last time I went looking for this, I found several sites that purported to offer this item were in fact offering malware-infected payloads. The original from Microsoft remains entirely safe, so feel free to use the foregoing download link yourself, if you like.
To those who never really got into gadgets, I apologize for the “happy dance” tone of this blog post. Personally, I have always found some of these simple and tightly focused programs quite helpful, so I am delighted to see them return to my Windows 8 and 8.1 desktops. If this has been nothing more than a big ho-hum for you, after asking “Why are you still reading this?” I can add “So sorry for going off about something so apparently insignificant.” In my own case, however, the information the foregoing items provide to me (especially as I have to ponder whether or not to restart an apparently hung PC, or wait for some oddball resource consumption spike to work its way through my system) makes a certain amount of celebration entirely worthwhile. Woohoo!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145869.83/warc/CC-MAIN-20200224010150-20200224040150-00245.warc.gz
|
CC-MAIN-2020-10
| 4,264 | 5 |
https://ecosystem.atlassian.net/browse/PLVS-339?src=confmacro
|
code
|
Cannot see how to create a sub-task in the VS IDE connector.
This is configured correctly in Jira online (can create subtasks through the browser).
If creating from the main Issues - JIRA panel, then the only task types are the top level tasks. If using the task details panel itself, then under the subtasks tab I can see, but not create new, subtasks.
Windows 7 Ultimate 64-bit (Build 7601: SP1)
Microsoft Visual Studio 2010 Ultimate: Version 10.0.40219.1 SP1Rel
Microsoft .NET Framework: Version 4.0.30319 SP1Rel
Atlassian Connector 1.3.4-20111011-1133
Issue created using feature request button in the VS Atlassian Tool Window.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000367.74/warc/CC-MAIN-20190626154459-20190626180459-00538.warc.gz
|
CC-MAIN-2019-26
| 631 | 8 |
http://www.1emulation.com/forums/topic/33497-fbl-14-pixel-perfect-ini-discussion/page-6
|
code
|
FBL 1.4 pixel perfect ini discussion
Posted 28 August 2011 - 11:21 PM
Posted 29 August 2011 - 08:26 AM
The horizontal size of Neo Geo games isn't a problem to me because, on real hardware, only the vertical size is fixed to the 'pixel perfect' value. The horizontal is different on every monitor. So for FBL defaults I only need to correct the vertical height. It currently defaults to 448, as 224 is the most common vertical resolution in the supported games, so most games will look right with scanlines. But Cave, for example, uses 240, so this core will need to have its height adjusted to fix the scanlines.
Yes, this should do the trick for most of the systems present in the FBL rom list. CPS1 and CPS2 all have the very same width x height, except for Mercs and another one that is rotated, if I remember correctly.
Cave has its own size, Psikyo has its too...
But things get problematic in NeoGeo, since the "official" documentation out there in sites and forums preaches NeoGeo as 320x224 screen size games; however, there are many titles that are 304x224, verified and tested by me and a few other comrades.
So, there are some different variables to take care off...
It could be wonderful to be able to open a game ALREADY in pixel perfect, right from a clean install... just like CoinOPS, but I think this could be a devilish job.
FBL's video code is a completely different story to MAMEoX. The only way to achieve a 'pixel perfect' setting for every game would be to have the emulator check the hardware flag when starting a game and adjust the size accordingly. The problem with this is that there are many different hardware flags that would need to be factored in and I would need to know the 'pixel perfect' screen sizes for all of them. Plus it could potentially cause so many problems so I don't think it's something I'd be keen to pursue.
Most of them would still be fine. Any games that use 224 vertical resolution would not be changed (that includes all Neo Geo, CPS, PGM, and more). In fact it's probably only Cave games and rotated games for now that would be affected by this. Even if I left all the standard 4:3 games as they are, the 3:4 size definitely needs to be corrected.
If you tell me that this idea will not interfere with or destroy the current INIs I am using, this will be no problem at all. If you apply this new system, will my current INIs (already set in pixel perfect, with scanlines good as gold) keep working normally?
Thanks for those. I don't actually have a 720p setup but the screen values should stay the same regardless of the HD mode so I'm assuming these will work in 480i/p as well.
By the way, I am sending you a PM with *all* my inis.
Please try some different vertical games to see how me and PhilExile use to play.
I bet you will dislike our settings for vertical games (with cropped data out of the screen), but I personally prefer to play this way instead of playing with the image blurry and shrunk on a very small screen.
- Battle Garegga
- Twin Hawk
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691977.66/warc/CC-MAIN-20170925145232-20170925165232-00523.warc.gz
|
CC-MAIN-2017-39
| 3,095 | 21 |
https://iammdnor.com/2012/08/06/how-to-install-application-in-windows-8-using-windows-store/
|
code
|
This tutorial will show you how to install application in Windows 8 using Windows Store.
1. From Metro UI, click on Store icon.
2. Select the app you want to install. In my case, I chose to install Fresh Paint.
3. Click the Install button.
4. If you logged in with a local account, you will be asked to sign in with your Windows Live ID. To stop this pop-up from appearing every time you want to install new apps, you need to configure your Windows 8 with a Windows Live ID. Click here to see how to configure your mail account in Windows 8.
5. Done. When the install completes, you will see the app's tile in your Metro UI.
Note: This tutorial was written on Windows 8 Release Preview.
To see my other blog post about Windows 8, click here. If you have a different or better way, please share with us.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00047.warc.gz
|
CC-MAIN-2023-06
| 754 | 7 |
https://www.openshift.com/blog/image-builder-4-fedora-container-build-service-explained
|
code
|
OpenShift Commons Image Builder SIG #4: Fedora Container Build Service and Fedora Cloud Explained
June 21, 2016 | by
In this Video
Adam Miller (aka @Maxamillion), Senior Software Engineer at Red Hat, is the Fedora Release Manager and is leading the charge on the Fedora Container Build Service. He is going to give us a walk-through and overview of the Fedora Container Build Service and an intro to the Fedora Cloud initiative. Special guest appearance from Matthew Miller, Fedora Project Leader.
Don't forget to leave your feedback and suggestions for each video or in the comments section below. This will be incredibly important to shape this Special Interest Group and create sessions that fit the demands of all the OpenShift developers in the community.
About OpenShift Commons
OpenShift Commons is the place for organizations that are part of the OpenShift community to connect with peers and other related open source technology communities to communicate and collaborate across all OpenShift projects and stakeholders.
The Commons' goal is to foster collaboration and communication between OpenShift stakeholders to drive success for all members, and expand & facilitate points of connection between members for sharing knowledge and experience to help drive success for the platform and for participants: customers, users, partners, and contributors.
We've been publishing a ton of great video content on our YouTube Channel, and streaming daily on Twitch.tv. The schedule for shows can be found on the OpenShift.tv page, and you're always welcome to ...
Just before the Holidays sweep everyone away, we thought it best to present you with some of our favorite bits from our streaming video channel. While the channel is live according to this schedule, ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704839214.97/warc/CC-MAIN-20210128071759-20210128101759-00414.warc.gz
|
CC-MAIN-2021-04
| 1,761 | 10 |
https://cloudhedge.io/dictionary/microservices/
|
code
|
Dictionary of Modernization
"Microservices - also known as the microservice architecture - is an architectural style that structures an application as a collection of services that are - Highly maintainable and testable - Loosely coupled - Independently deployable - Organized around business capabilities - Owned by a small team The microservice architecture enables the rapid, frequent and reliable delivery of large, complex applications. It also enables an organization to evolve its technology stack. The microservice architecture is not a silver bullet. It has several drawbacks. Moreover, when using this architecture there are numerous issues that you must address."
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653764.55/warc/CC-MAIN-20230607111017-20230607141017-00011.warc.gz
|
CC-MAIN-2023-23
| 674 | 2 |
https://www.oreilly.com/library/view/microsoft-sql-servertm/9780470179543/9780470179543_installing_sql_server.html
|
code
|
I.4.1. Installing SQL Server
Deploying SQL Server 2008 on your computer is much less complicated than you might think. However, even if you have a screamingly fast server, completion can take some time; you probably have enough time to hit the gym, shower, and grab a sandwich after the actual file copying is underway.
When you determine your system is up to snuff and you're ready to get started, here's what to do:
Run the Setup.exe application from your SQL Server installation CD.
In many cases, inserting the media triggers the installation application to start automatically.
If necessary, install the .NET Framework and accept its license terms.
Assuming you have an Internet connection, SQL Server will automatically retrieve this software from Microsoft's servers.
Review your options in the SQL Server Installation Center.
As you can see in Figure 4-1, the SQL Server Installation Center offers several helpful paths, including hardware and software requirements, upgrade options, and SQL Server samples.
Click on the Installation option from the SQL Server Installation Center.
This brings up a new dialog box, shown in Figure 4-2, that offers a number of different installation trajectories, including new stand-alone installations, clustering configurations, upgrades, and so on. In this case, ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00027.warc.gz
|
CC-MAIN-2022-27
| 1,310 | 11 |
https://md.ekstrandom.net/code/rstash/
|
code
|
rstash is a pair of programs for dropbox-oriented network file transfer. The server program rstash-recv, run via inetd, provides a set of dropboxes into which other hosts can place files using the client rstash-send. My current use case for it is transmitting backup files from VMs to the host machine without opening up automated-access shell accounts, NFS, or some other element of overkill that I then have to lock down.

rstash-recv is intended to be run from inetd as some low-privilege user, only having the privileges necessary to write to the file dropboxes.
The source installation contains and installs man pages for rstash and both utilities. For convenience, they are also available in HTML form here:
rstash is currently in alpha status; it seems to work, but it may (probably does) have bugs.
Download and installation
Installation is autotooled, so do the usual configure/make/make install dance. After installing, set up a configuration file (default SYSCONFDIR/rstash-recv.conf) and add an entry to your inetd configuration to activate the server.
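For illustration, an inetd.conf entry might look like the following. This is a hedged sketch: the service name, user, and install path are assumptions, the service name must also be mapped to a port in /etc/services, and the man pages installed above document the real invocation:

rstash  stream  tcp  nowait  rstash  /usr/local/sbin/rstash-recv  rstash-recv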
The source code is under the MIT license.
If you want to monitor progress on rstash, the development sources are available from my Mercurial repository at [http://bitbucket.org/mdekstrand/rstash/]. It's a BitBucket repository, so the bug tracker is there too.
rstash should work on any Unix-like system. It has been tested on GNU/Linux (Debian Lenny and Ubuntu 8.04) and FreeBSD 6.4 (client only).
The only dependency for rstash outside of a functioning build system and Unix-like operating system is mhash.
rstash is not the only solution to file transfer problems. There are, of course, ftp, ssh, nfs, and other heavyweight systems. As discussed above, rstash was designed for situations where these are undesirable overkill and unnecessarily difficult to secure.
Comparison with sendfile
Since writing rstash, I have been advised of the existence of sendfile, a system for asynchronously transferring large files between users on different hosts. It looks like sendfile can solve some of the same problems rstash was intended to, although sendfile is oriented towards user-to-user transfers and rstash towards host-to-host transfers (system administrators running backups, etc.). rstash has the following benefits over sendfile:
- Authentication for incoming file connections -- only clients with the authentication key can send files.
- Server runs as an unprivileged user.
- No receive program/instance -- files show up directly in the drop box's directory on the host system.
Disclaimer: I have not used sendfile myself, merely downloaded it and perused its documentation to see what I missed in my initial search for existing work. If my assessment here is incorrect, please let me know.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500334.35/warc/CC-MAIN-20230206082428-20230206112428-00633.warc.gz
|
CC-MAIN-2023-06
| 2,737 | 30 |
https://rtw.ml.cmu.edu/rtw/kbbrowser/entity.php?id=person%3Apeople_no_matter
|
code
|
literal strings: people no matter
- LE @1084 (99.1%) on 07-dec-2017
NELL has only weak evidence for items listed in grey
- CPL @1096 (75.0%) on 18-jan-2018 [ "arg1 happen to good arg2" "arg1 done through other arg2" ] using (things, people_no_matter)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500303.56/warc/CC-MAIN-20230206015710-20230206045710-00739.warc.gz
|
CC-MAIN-2023-06
| 382 | 7 |
http://www.tomshardware.com/forum/252143-30-bonehead-question-cables
|
code
|
Thanks for the reassurance. I was worried because, in the specs on Newegg, they don't give quantities for the cables. So according to that list it comes with at least one, but no indication of exactly how many (and I need two). However, if you good people can see the cables in the pics, then I guess that clears that up. Plus I just checked the Asus site and it does list 4 SATA cables. So just a listing oversight by Newegg I guess.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104560.59/warc/CC-MAIN-20170818024629-20170818044629-00449.warc.gz
|
CC-MAIN-2017-34
| 434 | 1 |
https://marco.org/2009/12/17/what-if-copyright-infringement-were-made
|
code
|
What if copyright infringement were made completely impossible? What if we had perfect enforcement at the technical level? (I know this isn’t possible, but bear with me. It’s a “thought experiment”.)
Music and video sites would instantly and perfectly detect any copyright infringement in uploaded files and refuse to host them. People would be forced to create (or find) content that’s licensed permissively enough, such as under the Creative Commons, to allow their usage. We’d give the big music and video publishers exactly what they think they want. But it would actually demolish them.
It would be the best thing that ever happened to those who speak so strongly against “all rights reserved”-style copyright enforcement.
Today’s demand for permissively licensed content is nearly zero because most people can get away with small-scale infringement. If that were no longer possible, all of these infringements would be replaced by much more demand for permissively licensed content. Any publishers unwilling to satisfy the demand would be left in the dust by those who would.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00023.warc.gz
|
CC-MAIN-2022-33
| 1,099 | 4 |
https://www.percona.com/doc/percona-server/5.5/release-notes/Percona-Server-5.5.27-28.1.html
|
code
|
Based on MySQL 5.5.27, including all the bug fixes in it, Percona Server 5.5.27-28.1 is now the current stable release in the 5.5 series. All of Percona‘s software is open-source and free, all the details of the release can be found in the 5.5.27-28.1 milestone at Launchpad.
5.5.27-28.0 would crash or deadlock in XtraDB buffer pool code. This was caused by incorrect mutex handling in the porting of recently introduced InnoDB code to XtraDB. Bug fixed #1038225 (Laurynas Biveinis).
For general inquiries about our open source software and database management tools, please send us your question and someone will contact you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159193.49/warc/CC-MAIN-20180923095108-20180923115508-00383.warc.gz
|
CC-MAIN-2018-39
| 628 | 3 |
http://maemo.org/downloads/product/raw/Maemo5/espeak/?org_openpsa_qbpager_net_nehmer_comments_comment_page=2
|
code
|
Great program, comes in handy for geeky stuff :D
Speech synthesizer for English and other languages
eSpeak is a compact open source software speech synthesizer for English and other languages. This software can only be run from the command line. You can find more information about eSpeak at http://espeak.sourceforge.net/
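For example, once installed you can have it read a sentence aloud directly from the command line (a minimal illustration; see the man page for the available options):

espeak "Hello from the command line"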
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705575935/warc/CC-MAIN-20130516115935-00098-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 362 | 4 |
https://devblogs.microsoft.com/dotnet/net-framework-4-8-is-available-on-windows-update-wsus-and-mu-catalog/
|
code
|
.NET Framework 4.8 is available on Windows Update, WSUS and MU Catalog
We are happy to announce that Microsoft .NET Framework 4.8 is now available on Windows Update, Windows Server Update Services (WSUS) and Microsoft Update (MU) Catalog. This release includes quality and reliability fixes based on feedback since the .NET Framework 4.8 initial release.
.NET Framework 4.8 is available for the following client and server platforms:
- Windows Client versions: Windows 10 version 1903, Windows 10 version 1809, Windows 10 version 1803, Windows 10 version 1709, Windows 10 version 1703, Windows 10 version 1607, Windows 8.1, Windows 7 SP1
- Windows Server versions: Windows Server 2019, Windows Server version 1809, Windows Server version 1803, Windows Server 2016, Windows Server 2012, Windows Server 2012 R2, Windows Server 2008 R2 SP1
Note: Windows 10 May 2019 Update ships with .NET Framework 4.8 already included.
The updated .NET Framework 4.8 installers (which include the additional quality and reliability fixes) are available for download.
Quality and Reliability Fixes
The following fixes are included in this update:
- Fixed System.Web.Caching initialization bug when using ASP.NET cache on machines without IIS. [889110, System.Web.dll, Bug]
- Fixed the ability to select ComboBox edit field text using mouse down+move [853381, System.Windows.Forms.dll, Bug]
- Fixed the issue with interaction between WPF user control and hosting WinForms app when processing keyboard input. [899206, WindowsFormsIntegration.dll, Bug]
- Fixed the issue with Narrator/NVDA announcing of PropertyGrid’s ComboBox expanding and collapsing action. [792617, System.Windows.Forms.dll, Bug]
- Fixed the issue with rendering “…” button of PropertyGrid control in HC mode to draw button background and dots contrasted. [792780, System.Windows.Forms.dll, Bug]
- Fixed a handle leak during creation of a Window in WPF applications that are manifested for Per Monitor DPI V2 Awareness. This leak may lead to extraneous GC.Collect calls that can impact performance in Window creation scenarios. [845699, PresentationFramework.dll, Bug]
- Fixed a regression caused by the bug fix involving bindings with DataContext explicitly on the binding path. [850536, PresentationFramework.dll, Bug]
- Fixed crash due to ArgumentNullException when loading a DataGrid containing a ComboBox while automation is active. For example, when navigating Visual Studio to the Text Editor\C#\Code Style\Naming page in Tools\Options. [801039, PresentationFramework.dll, Bug]
You can see the complete list of improvements for .NET Framework 4.8 in the .NET Framework 4.8 release notes.
Knowledge Base Articles
You can reference the following Knowledge Base Articles for the WU/WSUS/Catalog release:
|OS Platform||.NET Framework 4.8 Redistributable||.NET Framework 4.8 Language Pack|
|Windows 7 SP1/Windows Server 2008 R2||KB4503548||KB4497410|
|Windows Server 2012||KB4486081||KB4087513|
|Windows 8.1/Windows Server 2012 R2||KB4486105||KB4087514|
|Windows 10 Version 1607||KB4486129 (Catalog Only)||KB4087515 (Catalog Only)|
|Windows 10 Version 1703||KB4486129||KB4087515|
|Windows Server 2016||KB4486129 (Catalog Only)||KB4087515 (Catalog Only)|
|Windows 10 Version 1709||KB4486153||KB4087642|
|Windows 10 Version 1803||KB4486153||KB4087642|
|Windows Server, version 1803||KB4486153||KB4087642|
|Windows 10 Version 1809||KB4486153||KB4087642|
|Windows Server, version 1809||KB4486153 (Catalog Only)||KB4087642 (Catalog Only)|
|Windows Server 2019||KB4486153 (Catalog Only)||KB4087642 (Catalog Only)|
How is this release available?
.NET Framework 4.8 is being offered as a Recommended update. The reliability fixes for .NET Framework 4.8 will be co-installed with .NET Framework 4.8. At this time, we’re throttling the release as we have done with previous .NET Framework releases. Over the next few weeks we will be closely monitoring your feedback and will gradually open throttling.
While the release is throttled, you can use the Check for updates feature to get .NET Framework 4.8. Open your Windows Update settings (Settings > Update & Security > Windows Update) and select Check for updates.
Note: Throttled updates are offered at a lower priority than unthrottled updates, so if you have other Recommended or Important updates pending those will be offered before this update.
Once we open throttling, in most cases you will get the .NET Framework 4.8 with no further action necessary. If you have modified your AU settings to notify but not install, you should see a notification in the system tray about this update.
The deployment will be rolled out to various geographies globally over several weeks. So, if you do not get the update offered on the first day and do not want to wait until the update is offered, you can use the instructions above to check for updates or download from here.
Windows Server Update Services (WSUS) and Catalog
WSUS administrators will see this update in their WSUS admin console. The update is also available in the MU Catalog for download and deployment.
When you synchronize your WSUS server with Microsoft Update server (or use the Microsoft Update Catalog site for importing updates), you will see the updates for .NET Framework 4.8 published for each platform.
.NET Framework 4.8 can be downloaded and installed manually on all supported platforms using the links from here.
In addition to the language-neutral package, the .NET Framework 4.8 Language Packs are also available on Windows Update. These can be used if you have a previous .NET Framework language pack installed, as well as if you don't but instead have a localized version of the base operating system or one or more Multilingual User Interface (MUI) packs installed.
Blocking the automatic deployment of .NET 4.8
Enterprises may have client machines that connect directly to the public Windows Update servers rather than to an internal WSUS server. In such cases, an administrator may have a need to prevent the .NET Framework 4.8 from being deployed to these client machines to allow testing of internal applications to be completed before deployment.
In such scenarios, administrators can deploy a registry key to machines and prevent the .NET Framework 4.8 from being offered to those machines. More information about how to use this blocker registry key can be found in the following Microsoft Knowledge Base article KB4516563: How to temporarily block the installation of the .NET Framework 4.8.
What do I need to do if I already have .NET Framework 4.8 product installed and want the reliability fixes?
If you installed .NET Framework 4.8 via Download site earlier, then you need to reinstall the product using the links at the top of the blog.
Do I still need to install updated .NET Framework 4.8 if I am getting .NET 4.8 from Windows Update/WSUS?
No, .NET Framework 4.8 via Windows Update and WSUS will install the product and the included reliability fixes.
Will Windows Update offer the updated .NET Framework 4.8 if I already have the RTM version (4.7.3761) of .NET 4.8 installed?
Yes, Windows Update will offer the .NET 4.8 product update to machines that have the RTM version (4.7.3761) of the product already installed. After the update you will see the new version (4.7.3928) of files that were included for the reliability fixes.
Will the RTM version (4.7.3761) of the .NET Framework 4.8 installers still work if I had downloaded them earlier?
Yes, the installers will still work, but we recommend that you download the latest versions of the installers as per the links above.
Will the detection key (Release Key) for the product change after I install the updated .NET Framework 4.8?
No, the Release key value in the registry will remain the same. See here for guidance on product detection and release key value for .NET 4.8.
How can I get the reliability fixes for Windows 10 May 2019 Update (Version 1903)?
These reliability fixes will be available via the next .NET Framework Cumulative update for Windows 10 May Update (Version 1903).
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540551267.14/warc/CC-MAIN-20191213071155-20191213095155-00077.warc.gz
|
CC-MAIN-2019-51
| 8,050 | 59 |
https://www.ruby-forum.com/t/rails-2-question/205061
|
code
|
I got a project that is running on Rails 2.3.5 on Ruby 1.8.6. I am used to working with Rails 3.0 where, when I get the codebase, I use Bundler to install my gems as declared in the Gemfile. What was the predecessor to Bundler? Is there a way to quickly set up my environment? And where is the Gemfile… all I see is a few gems listed in the environment.rb file?
Any help would be greatly appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00372.warc.gz
|
CC-MAIN-2021-25
| 406 | 7 |
https://docs.anchore.com/current/docs/overview/notifications/jira/
|
code
|
Notifications to Jira are in the form of new issues in a project
To receive notifications
- The following Jira account and project related information is required
- URL of the Jira project
- Username of the account for creating issues
- API token or password depending on the Jira project
- For Jira Cloud projects an API token is required. Follow instructions to create a new API token for the account creating issues
- For Jira Self-managed projects password of the account creating issues is required
- Project key, the same as the prefix in the issue identifier. For instance, issue TEST-1 has the project key TEST
- Type of the issue to be created
- (Optional) Priority to be assigned to the issue
- (Optional) one or more labels to be assigned to the issue
- (Optional) Jira user to be assigned to the issue
- Create a Jira endpoint configuration in the Notifications service either via Enterprise UI or the API directly
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057479.26/warc/CC-MAIN-20210923225758-20210924015758-00467.warc.gz
|
CC-MAIN-2021-39
| 1,115 | 19 |
https://pietschsoft.com/post/2007/07/03/707-godaddy-promo-codes
|
code
|
I don’t usually have any GoDaddy promo code on hand, so I usually search for some. Usually I don’t find any that work. Except this time I search and found one that works, so I think I’ll share it.
OYH3 - $2 Off / $6.95 any .COM
I just used the above code on registering a new domain and renewing an existing one, and it was applied to both.
This blog post is licensed under the Creative Commons Attribution 3.0 United States License, unless explicitly stated otherwise within the blog post content. All other content on this website is not licensed under Creative Commons licensing.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529664.96/warc/CC-MAIN-20190723193455-20190723215455-00015.warc.gz
|
CC-MAIN-2019-30
| 588 | 4 |
https://discourse.flucoma.org/t/musical-use-of-descriptors-discussion/76
|
code
|
Sam and I are wanting to do some work on the musical use of descriptors. We are interested in gathering stories about issues, interesting usages, points of technique, usefulness of various descriptors, etc., that arise when attempting to use these things for musical/composition/performance purposes (rather than MIR).
I had some interesting discussions at the plenary with people about these things and we’d love to start a larger conversation here with the hope that we can put something together to publish. For now the shape of that is very open, so if you are interested we’d love to have your thoughts. PA and team have agreed that it is of interest to them to see the conversation, hence I’m keeping it here.
This is of great interest to me as well.
In my big sound collection my familiarity with certain subfolders varies from ‘well known’ to ‘somewhat familiar’ to ‘unknown’. Methods of exploring descriptors on these different categories varies obviously.
All analyses have been calculated with Alex's descriptors~ in non-realtime.
For example, I analyzed a collection of 250 soprano saxophone multiphonics. All values are mean over total duration of sound.
The range of inharmonicity is only from 0.005 to 0.321, while to my ears they are all pretty ‘inharmonic’.
Harmonic ratio on the same dataset ranges from 0.319 to 0.537. sfm is 0.000 to 0.003.
All this just to say that the output is much less intuitive than I would have imagined.
Here are a couple of useful combinations:
durations above 3 seconds (to get sustained sounds)
energyMax below -25dB
this yields soft sustained multiphonics
AttackLoudness is one I added to the game - this is just the energyAbs over the first 200ms of the sound.
Very handy on percussive or any ‘attack’ related sound.
Pitchdeviation comes in handy to exclude or search for glissandi and other pitch contours.
I have either a bug or did not understand the linear spread. My values range from 124732.805 to 920360.997. No idea what I’m measuring here. Sorry for my ignorance.
lin_brighness is very useful in general.
Just as a start of diving more into this subject.
Thanks. I think Lin spread is broken in that build. At the time I was writing the object I didn’t realise that sfm is much more manageable in dB form, so I’d suggest converting it with atodb to yield more useful values.
As some of you may have already picked up I’m currently interested in “automatic symbolic transcription” of some sense (that’s why I was asking).
What I use quite a lot is searching for melodic intervals in datasets, and what I have now is at the current stage relatively clumsy and unreliable. It works, but only because I know it doesn't and I know how to trick it. But before delving more into anything, I'll need to test the inputs you have already given me during the last few days.
Another two cents: I’ve used your foote descriptor over and over, I find that always extremely valid to distinguish between “fluid, steady stuff” and “uptempo, highly moving things”. That was a sort of huge time saver. “Ballad versus up tempo” ahah
Interesting topic indeed. I presume I use descriptors in 3 different yet similar ways:
for composing granulation in real-time. I have a circular buffer that I analyze as it comes (50 ms grain size, 10ms hop size) and I analyze 3 values (pitch, energy, and a timbral descriptor that changes with my current tastes - sfm or centroid mostly) and after the calibration I showed you all, I’m certain of the ballpark of values I get so I can compose real-time granulation beyond the on-off. For instance:
I showed you all the clouds of quiet pitched material,
the beginning of my soprano piece is doing random stutter of mezzo-forte noises from the singer.
I also do some cool alignment in my last piece (super long grains as looper with sync’d loudest points) to make a chamber music ensemble tight
And I’ve also done a granular looper that skips some grains so that makes auto-edited loops.
Fun stuff over the last 8 years thanks to Alex’s objects. The other usages are in the same vein:
I’ve pre-analysed some very dirty modular synth files, and I’ve used the real-time stream of descriptors of #1 above, offset and scaled to allow rich overlapping descriptor spaces, to make some cool synth variations of a live gesture. It can be heard in the quiet sections of my piece mono no aware
I’ve done some sampling of my modular, as I explained around the table: the patch is controlling one value through my Expert Sleeper’s ES-3+6 and I collect the same 3 descriptors above for each ‘control’ value. I can then query via descriptors again, or via fixed desired values. So far I’ve only used this as pitch tracker on complex patches, but I had loads of fun to make very synthetic birds… I was using the query for the pitch, and the stream of spectral centroid to open a filter (with some mapping), and the stream of power to open the VCA. It was great fun!
I hope some of these simple uses will inspire. If anyone want a patch above I’m happy to share.
I made a patch/system/piece that was played by both a vocalist and trumpeter. It used descriptors to chunk up the preceding 16 seconds of sound, and recall grains from that buffer which were most similar to the ‘now’ input of the musician. The caveat was that the 16 second memory was a bit fuzzy, and would have grains/sections removed based on how similar they were to the last 4-5 seconds of incoming audio. The idea was that instead of thinking in terms of a catart like instrument, the system would have some interesting corners for the musician to respond to and compromise with as they improvise.
If anyone is interested in the patch I’d be happy to refactor it a bit and pass it on.
As it was said in the plenary, I think that descriptors are fairly weak in what they can tell us about the sounds especially when the acoustic model is not that close to what we hear. Salient descriptors seem to be centroid, amp, duration and as you increase the complexity of these it becomes harder to milk them for a compositional purpose. The system I spoke of just above this paragraph used MFCC’s and it was quite proficient at picking similar sounds, but by no means is this a perceptual truth that the sounds were the closest - just that the MFCC’s data happened to be numerically similar. There were definitely moments where I was confused by the descriptor matching algorithm, and that another ‘area’ of sound I had heard previously would’ve been a better pick. In my philosophy for that system, I was really just using the descriptor paradigm to shape the output in a semi-logical way. Maybe I could’ve done this with another method of processing/analysis/synthesis and produced similar results, however, what seemed important creatively was working in this way and hinging the system’s behaviour on the differences between computational and perceptual similarity.
As I was going to sleep, dreaming of a better described world, I remembered that Diemo had made this list of people using CataRT and therefore descriptors musically: http://imtr.ircam.fr/imtr/CataRT_Music
With the sounds being made by the instrumentalists here, it would make sense that the MFCC would be the best descriptor. Pitch isn't going to get you very far with this material, I don't think, unless combined with a kind of gate that ignores the noisier sounds and includes the less noisy. I imagine spectral centroids would be useful as well.
I think the grain size/envelope might be distorting your results, however. With the grains, I am perceiving the envelope as much, if not more than the timbre of the grain, which blurs my perception of the correlation between computer and live signal. This makes me think that we could use descriptors to control the envelope time, overlap, and duration so that the grain shapes more “correctly” match the material.
I used descriptors last year to control real-time synthesis: oscillators and such. The piece used silent brass mutes on trombones, so basically the trombones were used as controllers for synthesis arrays. You could not hear their acoustic sound.
I found that pitch, amplitude, onset, and centroids were the things that worked. Because you couldn’t hear the trombone, I really didn’t have to worry about things like matching pitch or sounds. Centroids and pitches were basically used like sliders that I could map to any range, and I just manipulated and clipped the ranges until the sound and control I wanted emerged.
I found that the best things happened when multiple descriptors were used concurrently and a single descriptor was mapped to multiple, possibly unrelated variables. For instance, in one case I used the centroid as the control of the index of modulation of an FM patch and also to control the envelope of the attack and the pitch of a percussive oscillator sweep. Centroid is a weirdly good controller of attack, as it will change drastically over the course of the attack envelope (see Hans's AttackLoudness above).
This is only semi-related as I have nothing concrete to add, but something I’ve been wanting to figure out is a way to get meaningful descriptor data out of onset/transient-based music, where there is little to no sustain after the initial attack.
I spent a bit of time brainstorming/patching with Alex to get something that reliably spat out loudness/centroid/sfm (and perhaps pitch even) for a given drum attack. Obviously there are a lot of problems and compromises there, with the biggest concern for me being latency.
The dream, in this context, would be to be able to have some sense of a couple descriptors (loudness/centroid specifically, but sfm/pitch would be nice) in the amount of time it takes for normal-ish onset detection (<10ms).
This AttackLoudness idea is great, for offline (or delayed realtime) use. I started building something, but never finished it, that kept track of some ‘gestural’ data for each attack. Not useful for immediate/onset use, but my thinking was that it could be useful to create some kind of descriptor gesture/vectors which could be used elsewhere in the patch. (My exact use case for this was going to be to create “long” sounds concatenated together from a pool of samples based on the stretched micro gestural information from a short attack).
@spluta At times the duration and period parameters of the granulator hover around values that cause the grain to be significantly distorted as you put it. Also, when the duration is lower than the period you lose the continuity of the sound and it can disrupt that timbral fidelity.
In regards to your silent trombone piece, I like the use of the centroid as a kind of dirty envelope follower!
It’s a stereo file with the L channel being the audio recorded directly out of a Sensory Percussion drum trigger (you put a metal dimple on the drum and it has a hall sensor pick up the “audio”), and the R channel being a DPA 4060 recorded at the same time.
The software for the Sensory Percussion sensor does some cool machine learning stuff and is crazy fast, so I’m trying to replicate some of what it does, but without being inside their sandboxed “app” (or using the 7-bit MIDI output from it).
I will probably tweak the net version of descriptors~ (or similar) to not ever use info from incomplete analysis windows - the meaning of that data and possibility of skewing the result is just too high
I think the comment about zl stream refers to zl stream into zl median which is a simple median filter - that adds latency but can remove outliers in a convincing manner.
Alex gave a talk on descriptors. He spoke of many wise things. It made me think that I forgot to add to my list in this thread that I've done mute electric bass guitar as a controller for corpus navigation through descriptors since 2009, in this paper (http://eprints.hud.ac.uk/id/eprint/7421/). An even more naive version was implemented in Sandbox#2 in 2007.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00164.warc.gz
|
CC-MAIN-2022-27
| 12,036 | 53 |
http://sakrobotix.com/auto_botix.php
|
code
|
AutoBotix Workshop is an advanced robotics workshop where students learn to design and build autonomous robots. Participants design intelligent robots using sensors, motors, and programming. They get 100% hands-on experience with WMR design, then interface the sensors to the development board, where they also learn sensor positioning. Interfacing microcontrollers to a motor driver and controlling motors through port programming are the key takeaways of the workshop.
AutoBotix Kit Details:
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247495367.60/warc/CC-MAIN-20190220170405-20190220192405-00191.warc.gz
|
CC-MAIN-2019-09
| 522 | 2 |
https://youaccel.com/notes.php/HTML---Introduction-to-Forms/272
|
code
|
In this section of the course we are going to be creating an input form using HTML. A form is used to gather input from your user. Forms are the most common method of data collection from website visitors. The data that is inputted into the form can also be stored into a database or submitted to an email address.
Keep in mind that in order for a form to work and transmit data, we need to use a programming language that communicates with our web server. Later in this course we will be exploring PHP and working more with data processing. For now, in the HTML section of this course, we will only be creating the front end of the form - what the user sees through their web browser.
Let's take a look at an example of a basic contact form on the web:
On this form, the user is able to submit an inquiry directly to the website support staff by completing the required fields (Name, Email, Subject, and Message).
If we try to submit the form without filling in any information, we can see a number of errors appear. This is known as form validation.
Let's take a look at a diagram that illustrates how a form is processed once the user has clicked the send button.
As a reminder we will be working with form data processing later in this course.
For now we will begin with developing just the front end of the form using HTML.
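As a preview of where this section is heading, a minimal front end for the contact form described above might look like the sketch below. The action URL and field names are placeholders, and the required attributes provide only the browser-side validation mentioned earlier; the server-side processing comes later in the course.

<form action="process.php" method="post">
  <label for="name">Name</label>
  <input type="text" id="name" name="name" required>

  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>

  <label for="subject">Subject</label>
  <input type="text" id="subject" name="subject" required>

  <label for="message">Message</label>
  <textarea id="message" name="message" required></textarea>

  <button type="submit">Send</button>
</form>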
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439213.69/warc/CC-MAIN-20200604063532-20200604093532-00419.warc.gz
|
CC-MAIN-2020-24
| 1,328 | 8 |
https://www.mendeley.com/catalogue/4fa2f491-7d9e-3314-8f99-1506c7e500dd/
|
code
|
This article is free to access.
Background: DNase-seq and ATAC-seq are broadly used methods to assay open chromatin regions genome-wide. The single nucleotide resolution of DNase-seq has been further exploited to infer transcription factor binding sites (TFBSs) in regulatory regions through footprinting. Recent studies have demonstrated the sequence bias of DNase I and its adverse effects on footprinting efficiency. However, footprinting and the impact of sequence bias have not been extensively studied for ATAC-seq. Results: Here, we undertake a systematic comparison of the two methods and show that a modification to the ATAC-seq protocol increases its yield and its agreement with DNase-seq data from the same cell line. We demonstrate that the two methods have distinct sequence biases and correct for these protocol-specific biases when performing footprinting. Despite the differences in footprint shapes, the locations of the inferred footprints in ATAC-seq and DNase-seq are largely concordant. However, the protocol-specific sequence biases in conjunction with the sequence content of TFBSs impact the discrimination of footprint from the background, which leads to one method outperforming the other for some TFs. Finally, we address the depth required for reproducible identification of open chromatin regions and TF footprints. Conclusions: We demonstrate that the impact of bias correction on footprinting performance is greater for DNase-seq than for ATAC-seq and that DNase-seq footprinting leads to better performance. It is possible to infer concordant footprints by using replicates, highlighting the importance of reproducibility assessment. The results presented here provide an overview of the advantages and limitations of footprinting analyses using ATAC-seq and DNase-seq.
Karabacak Calviello, A., Hirsekorn, A., Wurmus, R., Yusuf, D., & Ohler, U. (2019). Reproducible inference of transcription factor footprints in ATAC-seq and DNase-seq datasets using protocol-specific bias modeling. Genome Biology, 20(1). https://doi.org/10.1186/s13059-019-1654-y
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474663.47/warc/CC-MAIN-20240226194006-20240226224006-00530.warc.gz
|
CC-MAIN-2024-10
| 2,082 | 3 |
https://www.se7ensins.com/forums/threads/i-need-help-with-this-bull-bridging.63858/
|
code
|
I am no noob when it comes to glitching and ****, I have done pretty much everything. However, I haven't bridged and used ZoneAlarm since Halo 2, so I don't know what is wrong here. I do everything (add IPs, etc.), bridge connections, and it's working. But ZoneAlarm does not stop the traffic to my Xbox and I can connect to any match even when my firewall is on high. Does anyone know what is wrong? Vista laptop using a wireless bridge.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826968.71/warc/CC-MAIN-20181215174802-20181215200802-00009.warc.gz
|
CC-MAIN-2018-51
| 430 | 1 |
https://www.tr.freelancer.com/projects/engineering-matlab-mathematica/bayesnet-classifier/
|
code
|
A Matlab implementation of the BayesNet classifier is required. The classifier should be implemented exactly as it is implemented in WEKA, but in Matlab code; i.e., it should be able to load the dataset from a .csv file, perform 10-fold cross validation, and then produce the following output:
1- Classification accuracy.
2- Number of true positives.
7- ROC area.
8- Plot ROC curve.
As I mentioned before, same as the WEKA random forest classifier, but the code should be written in Matlab.
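Although the job explicitly requires Matlab, here is a hedged sketch of the requested evaluation pipeline in Python with scikit-learn, just to pin down what is being asked for. The file name is hypothetical, GaussianNB is only a stand-in (WEKA's BayesNet also learns network structure, which this does not replicate), and ROC plotting is omitted:

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

# Hypothetical numeric CSV: feature columns followed by a 0/1 class label.
data = np.loadtxt("dataset.csv", delimiter=",")
X, y = data[:, :-1], data[:, -1]

# 10-fold cross-validated predictions and class probabilities.
pred = cross_val_predict(GaussianNB(), X, y, cv=10)
proba = cross_val_predict(GaussianNB(), X, y, cv=10, method="predict_proba")[:, 1]

print("classification accuracy:", accuracy_score(y, pred))
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("true positives:", tp)
print("ROC area:", roc_auc_score(y, proba))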
9 freelancers are bidding an average of $439 for this job
I am professional matlab programer: [url removed, login to view] Let disscuss.
Dear Sir, We are a Research and Development company whose working areas are: -Digital Motor Control -Analog Design -Electronic Design -Power Electronics -PCB design -Embedded system -Matlab -Simulation of ha… More
Hi, I have 15 years experience writing algorithms in Matlab. Please see PM.
I have experience in both data mining and matlab. i am redy for work. can we discuss further? thanks
hello sir , I would be a very grateful to work on this project. Please do consider my bid. With regards, Mani
i m an Electrical Engineer ..i have exp. in matlab..give this job to me..
Hi, Please see my personal message. Regards BELGASOFT
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936833.6/warc/CC-MAIN-20180419091546-20180419111546-00738.warc.gz
|
CC-MAIN-2018-17
| 1,263 | 14 |
http://www.smartphonemag.com/cms/blogs/cat/223
|
code
|
Some months ago, XDA-Devs forum member “cook” wrote a quick patcher app that does this (semi-)automatically.
This is how you can store your ActiveSync-synchronized Outlook mail on your storage card! Dell Axim x50, HP iPAQ hx4700 and hx2x
For a long time, I’ve thought there is no way of relocating ActiveSync-synchronized Outlook mail (that is, mail that ActiveSync synchronizes from/with your desktop Outlook; not to be confused with mail you download straight from your POP3/IMAP mailboxes on your PDA without any ActiveSync synchronization!) to memory cards.
Now, the situation has changed – you can store all your mail bodies (not just the attachments) on storage cards! This is handy for everyone (to lessen the load on the main storage) and particularly for people that have upgraded their ‘legacy’ WM2003SE Dell or HP devices to WM5.
Importance for WM5-upgraded Dells and HP devices
This hack is of extreme importance to Dell Axim x50(v), HP iPAQ hx4700, hx2x1x and hx275x users that have upgraded to WM5. As I’ve pointed out several times (for example here), you MUST reduce writing/deletion to/from the Flash ROM for these devices to be usable (that is, to avoid the filesys.exe compaction ‘kicking in’). This also means avoiding synchronizing Outlook mail with WM5-upgraded devices because, by default, they are all stored in the main storage. Now, with this hack, you can freely and safely synchronize your mail on these devices without lengthy filesys.exe compactions!
And, of course, the hack is very important for anyone wanting to store more than a handful of his or her mails on his or her PDA to keep the built-in free memory as large as possible, independent of the Pocket PC model.
Everything you need to know about flushing the Registry and WindowsCE databases under Windows Mobile 5
I’m constantly asked about why I keep telling users to power off (suspend; “full” power off is overkill and is not needed!) their Pocket PC’s after making some registry changes and before resetting their Pocket PC’s. As the answer also contains some really interesting stuff highly useful particularly to WM5 users that have a non-natively WM5 Pocket PC (the Dell Axim x50(v), the hx4700 or the WM2003SE-based HP iPAQ hx2xxx series), I devote an entire article to it.
If you have a non-native WM5 device with WM5, you will really want to read at least the second part of this article so that you can lessen the “filesys” burden (the worst thing on all non-native WM5 Pocket PC’s).
If you’re a Pocket PC guru/hacker and (also) interested in how WM5 works, the first part of the article will be of definite interest to you too. Note that understanding it is not needed for casual WM5 users to implement the changes in the second part!
Upon several users' requests, I've created a version for the hx2xxx series of the filesys.exe throttler application already known on the WM5-upgraded iPAQ hx4700.
It will greatly reduce the load caused by the infamous (make a generic search for it on Pocket PC boards or here, in the PPCMag expert blog!) filesys.exe on the WM5-upgraded HP iPAQ hx2xxx Pocket PC's.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123549.87/warc/CC-MAIN-20170423031203-00376-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 3,227 | 13 |
https://supergenerator.net/generators/random-ip-generator
|
code
|
Generate a completely random IP address.
Our random IP address generator will generate five IPv4 addresses at a time. To get more IPv4 addresses, click the generate button.
IP addresses are numbers assigned to each machine connected to a network that uses the Internet Protocol (IP) for communication. They identify the network interface and provide location addressing.
This generator will generate Internet Protocol version 4 (IPv4) addresses, which is a 32-bit number such as 22.214.171.124. Note that this is different from IPv6 addresses, such as 2001:db8:0:1234:0:567:8:1.
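For illustration, a generator like this one can be sketched in a few lines of Python. Note this is a naive version: real tools may exclude reserved ranges (private, loopback, multicast), which this sketch does not.

import random

def random_ipv4():
    # Each of the four octets of the 32-bit address is an integer 0-255.
    return ".".join(str(random.randint(0, 255)) for _ in range(4))

for _ in range(5):  # the page generates five addresses at a time
    print(random_ipv4())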
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.37/warc/CC-MAIN-20220809003642-20220809033642-00000.warc.gz
|
CC-MAIN-2022-33
| 569 | 4 |
http://www.linuxquestions.org/questions/linux-software-2/linuxmce-and-android-gmote-723957-print/
|
code
|
LinuxMCE and Android Gmote
I'm building a HTPC and am planning on using LinuxMCE. I've been reading about the extensive capabilities of LinuxMCE and am pretty excited about wiring up my "Smart Home."
What I'm even more excited about is my new G1 and the crazy possibilities that Android offers. There is an Android app that has been out for a while called Gmote that reportedly works with Linux. This is an app that turns your phone into a basic remote for your media center (stop, play, etc.) I believe you can browse files with it as well.
My ultimate goal is to be able to control all aspects of my smart home from my Android phone. I want to be able to control my movies and music, turn on/off electronics, see live video/audio from network cameras on my G1 screen, control light dimming, everything.
I seriously doubt we are anywhere near this kind of seamless integration between the two platforms, but that's what I would like to see someday.
Anyway, I'd like to hear from people using Gmote and LinuxMCE. What has your experience been like? Do they work well together? What are the capabilities and limitations? Is initial setup difficult?
Thanks for your feedback.
HTPC Specs (so far):
Lian-Li PC-C37B Case
Intel Core 2 Duo E8400 - 3.0 GHz
Asus PQ5-E Green - Intel P45 Chipset
Corsair Dual Channel TWINX 4GB PC6400 DDR2
LG Super Multi Blu-Ray/HD-DVD ROM
Ultra LSP650 650-watt Power Supply
Nexus One and LinuxMCE
I know this thread is over six months old, but I believe it is still very relevant. I have just received my first Android phone, Google's Nexus One, and have a LinuxMCE installation running. Once I get some spare time I will start working on syncing the two together, as Toonses82 has spoken about, and keep the community posted on the results. Toonses82, have you had any luck with connections or displays?
You know, I bagged LinuxMCE for now (due to limited spare time) and have just been running openSUSE 11.2 on my media center which works well for most things. I plan on coming back to XBMC or LinuxMCE or something similar soon.
Gmote on my Android phone, however, has been awesome! It is a great app that works well with Linux and VLC.
I'd still like to run my home from my Linux-based smartphone, but I'll have to revisit it when I have more free time. I've noticed this project as well which looks like it's trying to bring a home automation app to android. however, development seems slow if not stalled all-together. I look forward to when I can dive into this again and get my home all set up.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191396.90/warc/CC-MAIN-20170322212951-00241-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 2,575 | 20 |
http://www.codedairy.com/definitions/what-is-ws-rm
|
code
|
WS-ReliableMessaging describes a protocol that allows SOAP messages to be reliably delivered between distributed applications in the presence of software component, system, or network failures.
An Application Source wishes to reliably send messages to an Application Destination over an unreliable infrastructure. To accomplish this, they make use of a Reliable Messaging Source and a Reliable Messaging Destination. The Application Source sends a message to the Reliable Messaging Source. The Reliable Messaging Source (RMS) uses the WS-ReliableMessaging (WS-RM) protocol to transmit the message to the Reliable Messaging Destination (RMD). The RMD delivers the message to the Application Destination. If the RMS cannot transmit the message to the RMD for some reason, it raises an exception or otherwise indicates to the Application Source that the message was not transmitted.
The WS-RM protocol defines and supports a number of Delivery Assurances. These are:
- AtLeastOnce – Each message will be delivered to the Application (Destination) at least once. If a message cannot be delivered, an error will always be raised by the RMS or RMD. Messages may be delivered to the Application (Destination) more than once.
- AtMostOnce – Each message will be delivered to the Application (Destination) at most once.
- ExactlyOnce – Each message will be delivered to the Application (Destination) exactly once. If a message cannot be delivered, an error will be raised by the RMS or RMD.
- InOrder – Messages will be delivered from the RMD to the Application (Destination) in the order that they are sent from the Application Source to the RMS. This assurance can be combined with any of the above assurances.
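To make the AtLeastOnce assurance concrete, here is a minimal Python sketch. It is an illustration of the retry-until-acknowledged idea only, not the WS-RM wire protocol: the transport, class names, and retry limit are invented.

import random

class RMDestination:
    def __init__(self):
        self.delivered = {}

    def receive(self, seq, msg):
        # Re-delivery of a retransmitted message is possible: "more than once".
        self.delivered[seq] = msg
        return seq  # acknowledgement

class LossyChannel:
    # Toy unreliable infrastructure that randomly drops messages.
    def __init__(self, destination):
        self.destination = destination

    def send(self, seq, msg):
        if random.random() < 0.5:  # simulated network loss
            return None            # no acknowledgement comes back
        return self.destination.receive(seq, msg)

def rm_send(channel, seq, msg, max_retries=10):
    # RM Source: retransmit until acknowledged, else raise the required error.
    for _ in range(max_retries):
        if channel.send(seq, msg) == seq:
            return
    raise RuntimeError("message %d was not transmitted" % seq)

dest = RMDestination()
chan = LossyChannel(dest)
for seq, msg in enumerate(["order-1", "order-2"]):
    rm_send(chan, seq, msg)
print(dest.delivered)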
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706890813/warc/CC-MAIN-20130516122130-00001-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 1,744 | 8 |
https://medium.com/ramda-adjunct/chore-ramda-adjunct-v2-19-0-release-9a1b012929b2?source=collection_home---6------1-----------------------
|
code
|
Three weeks ago, we released ramda-adjunct v2.19.0. This release brings two distinct new features to the library and one enhancement. Let’s not waste time; let’s dig into it.
Flattens the list to the specified depth. It’s a full equivalent of the new Array.prototype.flat function, but its interface is functional and auto-curried. Under the hood it doesn’t use Array.prototype.flat, but a custom implementation of flattening from ramda. We could make this even better and use Array.prototype.flat if available, falling back to the custom algorithm only when needed. PRs are welcome! ;]
Returns false if both arguments are truthy; true otherwise. Again, this addition is a community contribution and can come in handy in some logical compositions.
I think I don’t need to explain what this function does ;] This function gained support for the ramda placeholder. Let me demonstrate what that means.
Additionally, we made some internal changes not visible to the outside observer, like reorganizing our TypeScript typings and testing them properly using a tool called dtslint. We’re currently on version v2.19.3, because we were experimenting with opencollective postinstall scripts. These experiments did not go well…
Like always, I end my article with the following axiom: Define your code-base as pure functions and lift them only if needed. And compose, compose, compose…
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660829.5/warc/CC-MAIN-20191015231925-20191016015425-00558.warc.gz
|
CC-MAIN-2019-43
| 1,399 | 6 |
https://sourceforge.net/directory/science-engineering/scientific/language:perl/os:hpux/
|
code
|
- Other Operating Systems (9)
- Linux (8)
- Modern (8)
- Grouping and Descriptive Categories (7)
- BSD (6)
- Solaris (5)
- Windows (5)
- Mac (3)
- Emulation and API Compatibility (1)
- Artificial Intelligence
- Electronic Design Automation (EDA)
- Human Machine Interfaces
- Information Analysis
- Interface Engine/Protocol Translator
Multiplatform Ham Radio APRS and Mapping Program57 weekly downloads
VisIt is an interactive parallel visualization and graphical analysis tool for viewing scientific data.1 weekly downloads
Web interface for designing, processing, and monitoring inter-application data flows. It’s a message-queue EAI. Written in Perl. Tested systems: Linux and HP-UX; WinXP is planned. Supported connectors: RDBMS, LDAP, EDI, CSV, XML, HTML, etc.
Build a multi-language plug-in HL7 interface using TCL, GDBM, Perl or Java to parse an HL7 (English) message into Spanish or other languages in real time. Plug-in to support interface engines such as Quovadx, etc., or custom interface engines.
An online SSL (128-bit strong encryption) repository for scripts created and maintained by Synopsys Design Consultants.
The project supplies a template or skeleton for mainly batch processsing applications which make use of the korn Shell (ksh / pdksh), Perl and other executables (see docs). Functionality @ shell/Perl level: logging, sending email to the support, ...
ApVSys is a general open-source wrapper designed for engineering Unix/Linux environment..It provides a way to use and manage simultaneously different versions of applications (engineering tools, compilers, debuggers simulators, ... ).
Set of tools and libs for managing structured data in a very flexible way: Imp./Exp. ASCII, XML, SQL, PS, Tex/LaTex, RTF GUI: X-Windows, MS-Windows Interface to C++, DBs, Perl, PHP, Java, TCP/IP LISP-like interpreter written in C++ using C-LIB
AI simulation of human life and behavior, based on LINUX architecture.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170186.50/warc/CC-MAIN-20170219104610-00067-ip-10-171-10-108.ec2.internal.warc.gz
|
CC-MAIN-2017-09
| 2,364 | 24 |
https://www.cwnp.com/forums/posts?postNum=307860
|
code
|
The CTS is transmitted by the destination node, not the originator of the RTS.
I haven't seen a post here from GT in quite a while, although I did see him at last year's conference.
He added a lot of good stuff here.
The entry from 'deleted user' sounds like one from dave1234, another user that put lots of effort into detailed technical answers - my apologies if that was not him. I know you can still find several hundred of his posts if you search by user name.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827097.43/warc/CC-MAIN-20181215200626-20181215222626-00505.warc.gz
|
CC-MAIN-2018-51
| 466 | 4 |
http://www.crunchbase.com/person/upendra-shardanand
|
code
|
|Massachusetts Institute of Technology, MENG||1994|
Upendra co-founded his first venture, Firefly Network, as a spin-off of his work at the MIT Media Lab. (Upendra’s graduate thesis centered on collaborative filtering, the recommendation technology now commonplace on the web).
Firefly Network was a pioneer in personalization and several web technologies, and was acquired by Microsoft in 1998.
At Microsoft, Upendra launched Microsoft Passport, and represented Microsoft on industry bodies to further the cause of online privacy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00030-ip-10-147-4-33.ec2.internal.warc.gz
|
CC-MAIN-2014-15
| 533 | 4 |
https://theskypedia.com/how-to-install-python-on-a-mac/
|
code
|
Welcome to our thorough, step-by-step tutorial on installing Python on a Mac. Python is a powerful and adaptable programming language used in a wide range of industries, including web development, data science, and artificial intelligence. Whether you are a novice or an experienced developer, this article will walk you through the installation process and offer helpful tips for a successful Python setup on your Mac.
Why Install Python on a Mac?
Python is a flexible programming language used for many different things, including web development, data analysis, automation, and artificial intelligence. Python can be installed on your Mac to provide you access to a wide variety of libraries and tools that can make your coding work easier. It is a useful tool to have in your programming toolbox because it also enables you to investigate different development options.
Checking Python Installation
Check to see if Python is already installed on your Mac before moving on with the installation. Python comes pre-installed on macOS; however, it's usually preferable to install the most recent version. Open the Terminal program and enter the following command to determine which Python version is currently installed:

python3 --version
Choosing the Right Python Version
There are several versions of Python available; however, the most recent stable version is recommended for the best compatibility and support. Python 3.x is the most recent version at the time of writing, with Python 3.9 being the latest release. Python 2.x versions should not be used, as they are no longer supported.
A package manager for macOS called Homebrew makes it easier to install programs and libraries. Open the Terminal and enter the following command to install Homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Installing Python Using Homebrew
Python can be installed via Homebrew after it has been set up. Enter the next command in the Terminal:
brew install python
Homebrew will then download and install the latest version of Python on your Mac.
Verifying Python Installation
Run the following command in the Terminal to confirm Python’s installation after the installation is finished:

python3 --version
Setting up Virtual Environments
Python projects and their dependencies can be isolated using virtual environments. They aid in avoiding conflicts between various projects that might need various library versions. You can use the following commands to build a virtual environment:
python3 -m venv myenv source myenv/bin/activate
To deactivate the virtual environment, simply type:

deactivate
Installing Python with Anaconda
Another well-known package manager with its own collection of pre-installed libraries is called Anaconda. Scientific computing and data science both make extensive use of it. Follow the directions on their official website to install Anaconda.
Uninstalling Python from Mac
Python installed through Homebrew can be removed using the commands below if you need to for any reason:
brew uninstall python
Removing the pre-installed Python is not advised, because doing so could make macOS unstable.
Common Installation Issues and Troubleshooting
You could occasionally run into problems while installing. Here are a few typical issues and their fixes:
- Permission Denied: If you face permission issues during installation, use sudo before the installation command to run it with administrator privileges.
sudo brew install python
Broken Dependencies: If Homebrew or other packages are causing conflicts, try running the following commands to fix them:
brew update brew upgrade
Congratulations! Python is now properly installed on your Mac. Python offers a plethora of coding and development options. Python is a great option for a variety of projects, regardless of your programming expertise level, thanks to its simplicity and large community support.
Read more: Road Map to Becoming an IoT Developer
Can I have multiple Python versions installed on my Mac?
Yes, it is possible to install multiple Python versions at once. You can manage different Python installations using tools like Homebrew and Anaconda.
How do I update Python to the latest version?
Run the following command to upgrade Python that was installed using Homebrew: “brew upgrade python”
Is it safe to uninstall the pre-installed version of Python on my Mac?
Since macOS depends on the pre-installed Python for a number of core operations, it is typically not advised to remove it. Its removal might result in unexpected behaviour.
Can I use Python for web development?
Yes, a lot of people utilise Python to construct websites. Python-based web application development is made simple by well-known web frameworks like Django and Flask.
Does Python support GUI (Graphical User Interface) development?
Python does indeed support GUI development. You may design graphical user interfaces for your apps using libraries like PyQt and Tkinter.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100651.34/warc/CC-MAIN-20231207090036-20231207120036-00131.warc.gz
|
CC-MAIN-2023-50
| 4,947 | 44 |
https://box.shipping-to.com/box
|
code
|
Use the map below to select the country for your Box Shipping:
Locating a national or international box shipping agent or forwarder is not so difficult if you are given the opportunity to take your time and assess the options available to you. This was our aim in designing this directory. The following list of box shipping experts can give you the advice and service that you demand.
It is only natural that no two overseas box shipping companies are exactly alike as each have their own areas and levels of expertise. These could be highlighted by the size of box shipping that they carry out or the speed with which they can deliver a consignment. Base your decision on your needs and their services.
Box Shipping Message Board:
- I want to send parcels
I like this cargo service. We don't want to waste time, so how about your services? I want to know about this, so can you explain to me how the box shipping service works? ...
- International shipping from Kuala Lumpur to Sydney
Dear Sir / Madam, I wanted to take my luggage from Kuala Lumpur to Sydney. I had a couple of questions if you don't mind: 1. How should I pack my items like dishes and clothes? I mean, where can I get boxes for packing them? 2. If you ...
- International shipping from bahrain to usa
I would like to ship a container from Bahrain to the USA. Do you make shipping for this route? ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00498.warc.gz
|
CC-MAIN-2023-50
| 1,512 | 10 |
https://community.adobe.com:443/t5/dreamweaver-discussions/dreamweaver/td-p/11143868
|
code
|
Not really. Dreamweaver is just a basic editor. Unless you know how to code, you will be better off sourcing a more advanced piece of software which has more sophisticated options for amateurs.
Try the Wappler editor. I'd recommend that for those who aren't serious developers, more the 'I have an idea, get rich quick' types. When that idea fails they will just move on to the job which pays the most, almost certainly nothing to do with web development, as they have no saleable skills unless they are a salesperson and can source their own work.
WordPress also, I believe, has numerous plugins that will most likely do what you require.
There is nothing uninformed about my opinions. You and a very small group of others are in denial, or possibly don't care, because if or when it all fails for them they will try their hand at something else and something else. They are nomads, latching on to whatever provides an opportunity, rather than people with the conviction to achieve what they are truly interested in. If you are so confident of your opinions, then link to a jobs board which specifically requires all these Wappler developers. No? Thought not. Case closed.
You mean you want to build another "YouTube" video sharing site?
Dreamweaver is a very good coding and site management tool. But nothing in Dreamweaver will do this for you automagically.
Look at WordPress and premium video-sharing themes.
Would Dreamweaver enable me to build a website where users can create their own accounts and upload videos, etc.?
This is akin to asking whether a scalpel would enable you to perform an operation.
The short answer is 'Yes'.
You would need sound knowledge of HTML, CSS, JS and a server language like PHP; a good grasp of at least one of the popular JS frameworks would be helpful too. If it's a no to these, then you need to look at some other solutions. Webflow seems to be popular these days and would do a lot of the work for you without extensive coding knowledge.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358673.74/warc/CC-MAIN-20211128224316-20211129014316-00551.warc.gz
|
CC-MAIN-2021-49
| 2,065 | 12 |
http://www.felgall.com/doswin73.htm
|
code
|
Question: A number of years ago I wrote a business application in the Microsoft BASIC Professional Development System that runs compiled in a DOS environment. It relies on sending control code sequences to the printer for proper page formatting. Only a handful of codes are used. The application is still in service, but now the users' computers all run Windows (probably various versions). The legacy BASIC program is run in a DOS window, but nothing comes out of the printer any more. How can this be remedied? Thank you.
Answer: There are three differences between DOS and Windows that matter here: a DOS program has the computer to itself, whereas under Windows many programs run at once; a DOS program can write directly to hardware such as printer ports, which Windows does not allow; and Windows routes all printing through its own printer drivers and spooler.
For your program to be able to access the printer, it will need to be amended to call the Windows API and ask Windows to pass the information to the printer. Windows will then print what is requested at a time that doesn't conflict with anything else it is printing.
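To make that concrete, here is a minimal sketch of sending raw control codes through the Windows print spooler. It uses Python with the pywin32 bindings purely for illustration (the original program is BASIC, but any Windows-aware language goes through the same OpenPrinter/WritePrinter sequence); the control codes shown are examples only:
import win32print  # pywin32 bindings to the Windows spooler API

# Example printer control codes, then text, then a form feed
data = b"\x1b\x45" + b"Hello from the spooler" + b"\x0c"

printer = win32print.OpenPrinter(win32print.GetDefaultPrinter())
try:
    # "RAW" asks the spooler to pass the bytes through untranslated
    win32print.StartDocPrinter(printer, 1, ("Legacy report", None, "RAW"))
    win32print.StartPagePrinter(printer)
    win32print.WritePrinter(printer, data)
    win32print.EndPagePrinter(printer)
    win32print.EndDocPrinter(printer)
finally:
    win32print.ClosePrinter(printer)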
Follow-on Question: Thanks for the explanation. That enables me to appreciate the problem. Do you have a recommendation of software or documentation that will explain how my application can communicate with the Windows API?
Answer: To get your program to run on Windows with access to the printer you will probably need to obtain a new compiler that supports Windows and rewrite the code to work with that. The new compiler should come with the necessary documentation on how to call the Windows API.
This article written by Stephen Chapman, Felgall Pty Ltd.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118743.41/warc/CC-MAIN-20170423031158-00199-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,415 | 6 |
https://www.template.net/editable/31987/complex-organizational-chart
|
code
|
Making an organizational chart can be a complicated task. It's like tracing where cancer originates and how it spreads to the rest of the body. That might be an unusual analogy, but the point here is that you don't need to trouble yourself starting from the bottom when you can freely use our Complex Organizational Chart Template. Apart from providing it for FREE, we have made our template easy to download and customize in different file formats. Since it already has suggestive content, all you need to do is edit it according to your organization's requirements. Hurry! Get it now!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00109.warc.gz
|
CC-MAIN-2021-49
| 594 | 1 |
https://forums.support.roxio.com/profile/90327-texdave1212/?do=content&change_section=1
|
code
|
OK, I'm trying to make backups of my Beatles 2009 Remasters CDs. I did the first 4 last month, with the audio files ripped with the XLD software and the QuickTime mini-docs burned onto a CD-R. Somehow I captured the mini-doc of each CD, and that's the problem: I can't seem to reproduce this step because I forgot how I did it and didn't write it down. My mistake, ugh! I have Toast 6. I know I can copy the disc in the Copy tab, but I wanted the audio files ripped with XLD, which is why I dragged and dropped the audio files into the Data tab to burn my CD. I tried to drag and drop the QuickTime file (mini-doc) into the Data tab and it works, but when I put in the blank CD-R, up comes a box listing the OS X QuickTime mini-doc files saying, "Some files couldn't be found. The corresponding items will be removed." (Check the attachment.) The Hard Day's Night OS X file has a letter "A" icon over it before I try to burn the CD. On the first 4 CDs I did last month, the Hard Day's Night icon is replaced with the album cover instead of the letter "A", and this is where the problem lies: I can click on that icon and it loads the video in the QuickTime player and it plays, with the album cover (OS X icon) showing. I can't figure out how to capture all the missing files you see in the attachment below. Here's a correct CD with Please Please Me (OS X) showing the album cover instead of the letter "A" over the icon. I would greatly appreciate any help.
Correct CD With Correct Mini-Doc(Please Please Me OSX).tiff
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401604940.65/warc/CC-MAIN-20200928171446-20200928201446-00508.warc.gz
|
CC-MAIN-2020-40
| 1,523 | 2 |