prompt (string, 4–3.12k chars) · chosen (string, 3–10.5k chars) · rejected (string, 3–8.98k chars)
why a file-based dependency manager rocks for c/c++
such goodies are always useful! thanks!
read here and find out why biicode rocks: <link>
why a file-based dependency manager rocks for c/c++
read here and find out why biicode rocks: <link>
it looks good
why a file-based dependency manager rocks for c/c++
it looks good
because it was about f*n time! almost all new languages ship with one by default. and c/c++ is the most used language in the world (or close to it)
why a file-based dependency manager rocks for c/c++
because it was about f*n time! almost all new languages ship with one by default. and c/c++ is the most used language in the world (or close to it)
i think biicode is trying to do (better) what autotools, cmake, and scons do, so comparing it to them might be useful. and if it is not trying to replace those tools, explaining that might be helpful.
clojure distilled
i love clojure, and dream of the day i can work with it. but do you know why more people get into ruby or js? their communities are beginner friendly. most texts on the subject of lisps read like wannabe dissertations, preferably set in computer modern and five pages long.
i've been working with clojure for 18 months now, after a bit more than 10 years of java. i'm having so much fun i don't think i could ever go back. also, this is an excellent summary / introduction to key clojure concepts; well done to the author.
clojure distilled
i've been working with clojure for 18 months now, after a bit more than 10 years of java. i'm having so much fun i don't think i could ever go back. also, this is an excellent summary / introduction to key clojure concepts; well done to the author.
i really suggest trying some classic scheme course first, like my favorite, cs61a by brian harvey, or that wonderful racket-based course (what's the difference?) on coursera by gregor kiczales (which is worth watching even if you are merely a brainwashed java dev). another way is through pg's books, especially, of course, "on lisp", and the arc tutorial and then arc.arc (the original version from the 2009 arc3.tar, which isn't cluttered with silly docstrings and other nonsense). there is also the way of cl. for that there is pg's ansi cl, but before that a much better "cl language concepts" from symbolics (pdfs are available from bitsavers). then, perhaps, one will have a slightly different view of what clojure really is. yes, it is a real marvel of software craftsmanship, especially since it was originally a solo effort, but no, there are no miracles in it.
clojure distilled
i really suggest trying some classic scheme course first, like my favorite, cs61a by brian harvey, or that wonderful racket-based course (what's the difference?) on coursera by gregor kiczales (which is worth watching even if you are merely a brainwashed java dev). another way is through pg's books, especially, of course, "on lisp", and the arc tutorial and then arc.arc (the original version from the 2009 arc3.tar, which isn't cluttered with silly docstrings and other nonsense). there is also the way of cl. for that there is pg's ansi cl, but before that a much better "cl language concepts" from symbolics (pdfs are available from bitsavers). then, perhaps, one will have a slightly different view of what clojure really is. yes, it is a real marvel of software craftsmanship, especially since it was originally a solo effort, but no, there are no miracles in it.
i'm not done reading, but it seems like a very nice introduction for people not familiar with the functional style of programming. this is also one of the few documents that can be converted from html to epub without much pain. it looks quite good with:

    pandoc clojuredistilled.html -o clojuredistilled.epub --indented-code-classes=clojure
clojure distilled
i'm not done reading, but it seems like a very nice introduction for people not familiar with the functional style of programming. this is also one of the few documents that can be converted from html to epub without much pain. it looks quite good with:

    pandoc clojuredistilled.html -o clojuredistilled.epub --indented-code-classes=clojure
extracting keys from maps into a definition, with common lisp (using my macro defk):

    (defk login (user pass)
      (and (string= user "bob") (string= pass "secret")))

    (login (list-to-hash '(user "bob" pass "secret")))

given the following definitions:

    (defmacro defk (name h-list &rest body)
      `(defun ,name (h)
         (let ,h-list
           ,@(loop for k in h-list
                   collect `(setq ,k (gethash (quote ,k) h)))
           ,@body)))

    (defun list-to-hash (list)
      (let ((h (make-hash-table)))
        (loop for (k v) on list by #'cddr
              do (setf (gethash k h) v))
        h))
john carmack on inlined code
the older i get, the more my code (mostly c++ and python) has been moving towards mostly-functional, mostly-single-static-assignment (let assignments). lately, i've noticed a pattern emerging that i think john is referring to in the second part. the situation is that often a large function will be composed of many smaller, clearly separable steps that involve temporary, intermediate results. these are clear candidates to be broken out into smaller functions. but a conflict arises from the fact that they would each only be invoked at exactly one location. so, moving the tiny bits of code away from their only invocation point has mixed results on the readability of the larger function. it becomes more readable because it is composed of only short, descriptive function names, but less readable because deeper understanding of the intermediate steps requires disjointly bouncing around the code looking for the internals of the smaller functions. the compromise i have often found is to reformat the intermediate steps in the form of control blocks that resemble function definitions. the pseudocode below is not a great example because, to keep it brief, the control flow is so simple that it could have been just a chain of method calls on anonymous return values.

    awesomenesst largerfunction(foo1 foo1, foo2 foo2) {
        // state the purpose of step1
        resultt1 result1;
        // inline resultt1 step1(foo1 foo)
        {
            bar bar = barfromfoo1(foo1);
            baz baz = bar.makebaz();
            result1 = baz.awesome();   // return baz.awesome();
        }
        // bar and baz no longer require consideration

        // state the purpose of step2
        resultt2 result2;
        // inline resultt2 step2(foo2 foo)
        {
            bar bar = barfromfoo2(foo2);   // second bar's lifetime does not overlap with the 1st
            result2 = bar.awesome();       // return bar.awesome();
        }

        return result1.howawesome(result2);
    }

i make a point to call out that the temp objects are scope-blocked to the minimum necessary lifetimes, primarily because doing so reduces the amount of mental register space required for my brain to understand the larger function. when i see that the first bar and baz go out of existence just a few lines after they come into existence, i know i can discard them from short-term memory when parsing the rest of the function. i don't get confused by the second bar. and i don't have to check the correctness of the whole function with regards to each intermediate value.
i might be alone on this, but whenever i read things by john carmack i get a vague sense that he doesn't really get object oriented programming. he always has a lot of interesting things to say, but it also kinda reads like a c guy trying to code in c++. i'm glad his thinking keeps evolving and he's not dogmatic about anything. i'd honestly love to hear his thoughts on c++11. "the function that is least likely to cause a problem is one that doesn't exist, which is the benefit of inlining it." that's the equivalent of saying "the faster you drive the safer you are b/c you're spending less time in danger". you'll just end up with larger monster functions that are harder to manage. "method c" will always be a disaster for code organization b/c your commented-off "minorfunctions" will start to bleed into each other when the interface isn't well defined. "for instance, having one check in the player think code for health <= 0 && !killed is almost certain to spawn less bugs than having killplayer() called in 20 different places" - i don't completely get his example, but i see what he's saying about state and the bugs that arise from it. you call a method 20 times and it has a non-obvious assumption about state that can crop up at a later point - and it can be hard to track down. however, the flip side is that when you do track it down, you will fix several bugs you didn't even know about. the alternative of rewriting or reengineering the same solution each time is simply awful and you'll screw up way more often.
john carmack on inlined code
i might be alone on this, but whenever i read things by john carmack i get a vague sense that he doesn't really get object oriented programming. he always has a lot of interesting things to say, but it also kinda reads like a c guy trying to code in c++. i'm glad his thinking keeps evolving and he's not dogmatic about anything. i'd honestly love to hear his thoughts on c++11. "the function that is least likely to cause a problem is one that doesn't exist, which is the benefit of inlining it." that's the equivalent of saying "the faster you drive the safer you are b/c you're spending less time in danger". you'll just end up with larger monster functions that are harder to manage. "method c" will always be a disaster for code organization b/c your commented-off "minorfunctions" will start to bleed into each other when the interface isn't well defined. "for instance, having one check in the player think code for health <= 0 && !killed is almost certain to spawn less bugs than having killplayer() called in 20 different places" - i don't completely get his example, but i see what he's saying about state and the bugs that arise from it. you call a method 20 times and it has a non-obvious assumption about state that can crop up at a later point - and it can be hard to track down. however, the flip side is that when you do track it down, you will fix several bugs you didn't even know about. the alternative of rewriting or reengineering the same solution each time is simply awful and you'll screw up way more often.
i'm not a professional programmer and i rarely work with large code bases, so i thought the fact that my code has drifted steadily over the years towards the large main function was a product of several things, the first being my general amateurism. i still think that, but there are definitely other reasons too: i now use more expressive languages (python instead of c), more expressive idioms within those languages (list comprehensions instead of while loops), and more expressive structures/libraries (numpy instead of lists of structures), so i can afford to put more in one spot. i also write smaller but more numerous programs. but there are very real advantages. i learned through game programming and still do some for fun, and i absolutely prefer having a main loop that puts its fingers into all the components of the game over a main loop which delegates everything to mysterious entity.update()-style functions. the lack of architecture allows me to structure the logic of the game more clearly, for exactly the reasons carmack outlines. everything is sequenced - what has already happened in the frame can be seen by scrolling up a bit instead of digging through a half-dozen files. but the real win here is for the beginner programmer. i strongly dislike the trend these days towards programming education being done in a "fill in the blanks" manner, where the student takes an existing framework and writes a number of functions. the problem is that the student rarely has any idea what the framework is doing. i would rather not have beginners write games by making on_draw(), on_tick(), etc. functions, but much rather have them write a for loop and have to call gfx_library_init() at program start and gfx_library_swap_buffers() at the end of a frame. that way they can say "the program starts here, steps through these lines, and exits here" versus having magic frameworks do the work for them. there is plenty of magic done behind the scenes for any beginner these days, but a completely opaque flow of control is too much.
john carmack on inlined code
i'm not a professional programmer and i rarely work with large code bases, so i thought the fact that my code has drifted steadily over the years towards the large main function was a product of several things, the first being my general amateurism. i still think that, but there are definitely other reasons too: i now use more expressive languages (python instead of c), more expressive idioms within those languages (list comprehensions instead of while loops), and more expressive structures/libraries (numpy instead of lists of structures), so i can afford to put more in one spot. i also write smaller but more numerous programs. but there are very real advantages. i learned through game programming and still do some for fun, and i absolutely prefer having a main loop that puts its fingers into all the components of the game over a main loop which delegates everything to mysterious entity.update()-style functions. the lack of architecture allows me to structure the logic of the game more clearly, for exactly the reasons carmack outlines. everything is sequenced - what has already happened in the frame can be seen by scrolling up a bit instead of digging through a half-dozen files. but the real win here is for the beginner programmer. i strongly dislike the trend these days towards programming education being done in a "fill in the blanks" manner, where the student takes an existing framework and writes a number of functions. the problem is that the student rarely has any idea what the framework is doing. i would rather not have beginners write games by making on_draw(), on_tick(), etc. functions, but much rather have them write a for loop and have to call gfx_library_init() at program start and gfx_library_swap_buffers() at the end of a frame. that way they can say "the program starts here, steps through these lines, and exits here" versus having magic frameworks do the work for them. there is plenty of magic done behind the scenes for any beginner these days, but a completely opaque flow of control is too much.
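[editor's note] to make that last point concrete, here is a minimal sketch of the explicit, top-to-bottom game loop the commenter describes. the gfx_library_* names in the comment are hypothetical, so this sketch substitutes pygame as a stand-in; the library choice and every name here are illustrative, not from the original comment.

    import pygame

    pygame.init()                                   # the commenter's gfx_library_init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()

    running = True
    while running:                                  # the whole game, visible in one loop
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill((0, 0, 0))
        # ... update and draw every game object right here, in sequence ...
        pygame.display.flip()                       # the commenter's gfx_library_swap_buffers()
        clock.tick(60)

    pygame.quit()

the whole control flow is visible in one place: initialization, the per-frame work, the buffer swap, and shutdown.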
if anyone other than carmack wrote this, i doubt it would be so well received, so i'm glad he did. we all have our own programming dogma that we love and defend religiously, but we should never stop asking whether our code is truthfully, objectively clear and easy to read, whether it is prone to bugs, and whether it runs efficiently. "best practices" can get you 80% of the way there, but a developer should never stop questioning the quality of their code, even if that questioning contradicts the sacred rules.
john carmack on inlined code
if anyone other than carmack wrote this, i doubt it would be so well received, so i'm glad he did. we all have our own programming dogma that we love and defend religiously, but we should never stop asking whether our code is truthfully, objectively clear and easy to read, whether it is prone to bugs, and whether it runs efficiently. "best practices" can get you 80% of the way there, but a developer should never stop questioning the quality of their code, even if that questioning contradicts the sacred rules.
haha, i like this quote: "that was a cold-sweat moment for me. after all of my harping about latency and responsiveness, i almost shipped a title with a completely unnecessary frame of latency."
standard ml family github project
<link> in case anyone else is hitting the godaddy landing page...
interestingly enough, sml is taught in the intro class at my school. it's the first programming language i learned.
standard ml family github project
interestingly enough, sml is taught in the intro class at my school. it's the first programming language i learned.
do people still use ml? it has some historic significance as a programming language but seems completely irrelevant now (although still taught as part of the university of cambridge cs course).
standard ml family github project
do people still use ml? it has some historic significance as a programming language but seems completely irrelevant now (although still taught as part of the university of cambridge cs course).
standard ml is my favorite language, bar none. it contains so many ideas that are obviously the right thing in retrospect, like parametric polymorphism, algebraic data types, pattern matching, hindley-milner type inference, etc. the ideas added by ml's successors, like ocaml and haskell, seem much more iffy and debatable in comparison. maybe the page should also mention concurrent ml? it's basically the right solution to the problem that go is fumbling toward.
standard ml family github project
standard ml is my favorite language, bar none. it contains so many ideas that are obviously the right thing in retrospect, like parametric polymorphism, algebraic data types, pattern matching, hindley-milner type inference, etc. the ideas added by ml's successors, like ocaml and haskell, seem much more iffy and debatable in comparison. maybe the page should also mention concurrent ml? it's basically the right solution to the problem that go is fumbling toward.
the web archive gives this for the url: <link>. from looking at github, it seems that the most recent intended content of the site can be seen at: <link>
show hn: 123d catch by autodesk – create 3d scans of virtually any object
there's documentation of a method to achieve the same thing using free-ish software here: <link>. it uses visualsfm and meshlab, both of which came to life as testbeds for algorithm research but are now useful in their own right. by all accounts 123d catch does an excellent job, but it is quite rigid in its workflow. apparently it uses the engine from acute3d's considerably more expensive smart3dcapture. sadly, neither visualsfm nor 123d catch is usable for commercial work, because of license and copyright problems respectively.
just fyi, you must create an account to have an image set processed. this looks like it is happening server-side, so i suppose there's a reason for it. cleverly enough, it doesn't prompt you to do so until after you've taken the 10-20-however-many pictures of your object. i suspect this makes a hell of a difference compared to a login screen upon launch or capture.
show hn: 123d catch by autodesk – create 3d scans of virtually any object
just fyi, you must create an account to have an image set processed. this looks like it is happening server-side, so i suppose there's a reason for it. cleverly enough, it doesn't prompt you to do so until after you've taken the 10-20-however-many pictures of your object. i suspect this makes a hell of a difference compared to a login screen upon launch or capture.
having worked with this kind of software solution since the nineties, i must say that you should really look instead into the software photoscan from the russian company agisoft (www.agisoft.ru). it's a lot more flexible and very cheap ($129 usd). this was called "photo fly" back when it was an autodesk labs project, and i believe the tech originates from when autodesk bought the french software company realviz, which made the software photomodeler.
show hn: 123d catch by autodesk – create 3d scans of virtually any object
having worked with this kind of software solution since the nineties, i must say that you should really look instead into the software photoscan from the russian company agisoft (www.agisoft.ru). it's a lot more flexible and very cheap ($129 usd). this was called "photo fly" back when it was an autodesk labs project, and i believe the tech originates from when autodesk bought the french software company realviz, which made the software photomodeler.
is this really a show hn? did you make this?
show hn: 123d catch by autodesk – create 3d scans of virtually any object
is this really a show hn? did you make this?
so this came out a few years ago [1] and then it seemed like it just disappeared. this is the second or third thing i have seen on this in just a few days. what happened in the interim? [1] <link>
the least effective method for blocking web scraping of a website
i disagree that it was grayed out to prevent scraping. i don't have a better suggestion, but professional intuition tells me the real reason was something really mundane and dumb that couldn't be figured out by someone unfamiliar with the codebase. in the end though, i really liked that chart. how many weeks of articles did you scrape to get that data?
the author tries desperately to find logic in buzzfeed, when really it's about as bad a website, in both content and design, as it gets.
the least effective method for blocking web scraping of a website
the author tries desperately to find logic in buzzfeed, when really it's about as bad a website, in both content and design, as it gets.
it's interesting that there is an apparent discontinuity in views between 9 list items and 10 list items in the last graph.
the least effective method for blocking web scraping of a website
it's interesting that there is an apparent discontinuity in views between 9 list items and 10 list items in the last graph.
the author tried harvesting the next url from the `older` button _before_ he tried urlfragment = "p=" + i++; ...wut? :)
the least effective method for blocking web scraping of a website
the author tried harvesting the next url from the `older` button _before_ he tried urlfragment = "p=" + i++; ...wut? :)
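[editor's note] for readers unfamiliar with the contrast being made: walking a page-number query string directly is the simpler of the two harvesting strategies. a minimal sketch of that approach follows; the archive url and page limit are invented for illustration and this is not the article's actual scraper.

    import requests

    BASE = "https://example.com/archive"       # hypothetical archive url
    pages = []
    for page in range(1, 11):                  # just walk ?p=1, ?p=2, ... directly
        resp = requests.get(BASE, params={"p": page})
        if resp.status_code != 200:            # stop when the site runs out of pages
            break
        pages.append(resp.text)                # parse article links / view counts from here

following the `older` button instead means fetching each page, parsing the next link out of the html, and requesting that, which is strictly more work when the page number is already sitting in the url.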
or perhaps they block the older button because only articles before page 10 are cached.
a stolen video of my daughter went viral
title should be changed to "video pulled from mom's youtube account uploaded to different youtube account goes viral"
the "thing the author learned" that i came away with was "people can steal things on the internet". the author's main point in the article was that she was not able to use the fame of her video (now re-posted) to advocate for her cause / take credit. this is certainly a problem, and youtube should take action, but i fail to see why it would lead one to suddenly remove items from youtube / facebook for privacy reasons. this is an article about failed marketing, not to be confused with the privacy of her child - it's clear she doesn't care about that, because she left the video public on youtube on purpose and has written an article for the new york times.
a stolen video of my daughter went viral
the "thing the author learned" that i came away with was "people can steal things on the internet". the author's main point in the article was that she was not able to use the fame of her video (now re-posted) to advocate for her cause / take credit. this is certainly a problem, and youtube should take action, but i fail to see why it would lead one to suddenly remove items from youtube / facebook for privacy reasons. this is an article about failed marketing, not to be confused with the privacy of her child - it's clear she doesn't care about that, because she left the video public on youtube on purpose and has written an article for the new york times.
don't post pictures and videos of your children on the internet. simple.
a stolen video of my daughter went viral
don't post pictures and videos of your children on the internet. simple.
things that you have posted on youtube with the intention that people download them cannot be stolen.
a stolen video of my daughter went viral
things that you have posted on youtube with the intention that people download them cannot be stolen.
children suffer enough "abuse" in our society, even without the bad effects of the internet or technology in general. they don't choose their names. they don't choose their religion. they don't choose their political views. for fuck's sake, at least let them grow up to the point that they can choose for themselves if they want to share content of themselves with the entire world.
code archeology – lines of code by age
which do we prefer, though? as an industry, we seem to choose the "new shiny" way more than the alternatives, to the point that "hasn't been touched by the author in a few years" is a warning sign more than a sign of completeness.
ha, i just did a little rubygem to find files of which x% of the lines are more than y years old - the olde_code_finder: <link>. the first version only found files where x% of the lines were written by a particular author. then someone gently suggested checking the line age, which made perfect sense.
code archeology – lines of code by age
ha, i just did a little rubygem to find files of which x% of the lines are more than y years old - the olde_code_finder: <link>. the first version only found files where x% of the lines were written by a particular author. then someone gently suggested checking the line age, which made perfect sense.
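[editor's note] a rough sketch of the same idea in python, leaning on git blame --line-porcelain (which prints an author-time epoch for every line); the threshold and the helper name are made up for illustration and this is not the gem's actual code.

    import subprocess, time

    def old_line_fraction(path, years=3):
        """fraction of a file's lines last touched more than `years` ago, per git blame."""
        out = subprocess.run(["git", "blame", "--line-porcelain", path],
                             capture_output=True, text=True, check=True).stdout
        stamps = [int(line.split()[1]) for line in out.splitlines()
                  if line.startswith("author-time ")]
        cutoff = time.time() - years * 365 * 24 * 3600
        return sum(1 for t in stamps if t < cutoff) / len(stamps)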
if you adapted this to wikipedia articles (a very nontrivial task) you could highlight "authority": how long has a fragment of an article withstood editors.
code archeology – lines of code by age
if you adapted this to wikipedia articles (a very nontrivial task) you could highlight "authority": how long has a fragment of an article withstood editors.
phabricator offers this functionality via its blame feature and shades of green: <link>
code archeology – lines of code by age
phabricator offers this functionality via its blame feature and shades of green: <link>
> remember - the lighter the color, the older the code.
i really think it should be the opposite: recent code should be clearly visible and older code should progressively fade toward black. the current setup is extremely unintuitive for me.
iphone 6: comparing invensense and bosch accelerometers
<link> is the source url; the eetimes article is a repost
invn was my former employer (~2013), so i'm quite familiar with the details of both of these chips. i'm not sure just how much i am allowed to talk about these parts, but imo the general consensus, that power consumption was the killer feature that got bosch the secondary accelerometer socket win, seems correct to me. the likely accel-only features are screen orientation, pedometer, and activity recognition, as the articles suggest. (i personally don't think that the 1ms vs 20ms start-up time matters much though.) also, (a) st microelectronics parts always had quality issues, so i'm not surprised that they lost the socket to bosch, and (b) i expect bosch to be selling the bma280 at cost or even at a loss. bosch was a late entrant to the consumer electronics accel/gyro world, and has always been keen on price dumping to win sockets and market share. edit: the more i think about this, the more i think that this is a situation where st was the only loser. they had both the accel and gyro sockets before, but essentially lost the gyro socket to invn and the accel socket to bosch. the integrated accel in the mpu-6700 is more of an add-on feature for low-power on-chip sensor fusion that was not possible with the st solution that was previously used (the dmp in the 6700 pre-dates the apple m7 chip that was touted for low-power activity tracking usage, but given that apple has the m7, i'm not sure how much of its capabilities are being used. my guess would be that it is only the on-chip 6-axis sf that they are using). the st socket was simply taken by bosch.
iphone 6: comparing invensense and bosch accelerometers
invn was my former employer (~2013), so i'm quite familiar with the details of both of these chips. i'm not sure just how much i am allowed to talk about these parts, but imo the general consensus, that power consumption was the killer feature that got bosch the secondary accelerometer socket win, seems correct to me. the likely accel-only features are screen orientation, pedometer, and activity recognition, as the articles suggest. (i personally don't think that the 1ms vs 20ms start-up time matters much though.) also, (a) st microelectronics parts always had quality issues, so i'm not surprised that they lost the socket to bosch, and (b) i expect bosch to be selling the bma280 at cost or even at a loss. bosch was a late entrant to the consumer electronics accel/gyro world, and has always been keen on price dumping to win sockets and market share. edit: the more i think about this, the more i think that this is a situation where st was the only loser. they had both the accel and gyro sockets before, but essentially lost the gyro socket to invn and the accel socket to bosch. the integrated accel in the mpu-6700 is more of an add-on feature for low-power on-chip sensor fusion that was not possible with the st solution that was previously used (the dmp in the 6700 pre-dates the apple m7 chip that was touted for low-power activity tracking usage, but given that apple has the m7, i'm not sure how much of its capabilities are being used. my guess would be that it is only the on-chip 6-axis sf that they are using). the st socket was simply taken by bosch.
the invensense has an autonomous mode (on android devices it can count steps without bothering the host cpu) -- does the bosch do this too? apple have a separate cortex-m3 which is branded as the "m7" or "m8" motion processor. is this what they use for step counting when the a8 is suspended? why not use the invensense for that? is the m8 + bosch less power?
iphone 6: comparing invensense and bosch accelerometers
the invensense has an autonomous mode (on android devices it can count steps without bothering the host cpu) -- does the bosch do this too? apple have a separate cortex-m3 which is branded as the "m7" or "m8" motion processor. is this what they use for step counting when the a8 is suspended? why not use the invensense for that? is the m8 + bosch less power?
a pipe dream here would be to use both accelerometers at the same time and filter out much of the random noise, resulting in much smoother accelerometer readings that could be used to calculate displacement. (goes off and downloads apple coremotion samples to see if they do this already)
iphone 6: comparing invensense and bosch accelerometers
a pipe dream here would be to use both accelerometers at the same time and filter out much of the random noise, resulting in much smoother accelerometer readings that could be used to calculate displacement. (goes off and downloads apple coremotion samples to see if they do this already)
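[editor's note] as a back-of-the-envelope check on that idea: averaging two sensors with independent noise cuts the random noise by roughly a factor of sqrt(2). a small numpy sketch, with all signal and noise values invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 10, 1000))        # pretend "true" acceleration signal
    a = truth + rng.normal(0, 0.05, truth.shape)    # accelerometer 1, independent noise
    b = truth + rng.normal(0, 0.05, truth.shape)    # accelerometer 2, independent noise
    fused = (a + b) / 2

    print(np.std(a - truth), np.std(fused - truth)) # fused noise is ~1/sqrt(2) of one sensor

that only addresses random noise; bias and integration drift would still make displacement from double integration hard.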
civil engineering grad student here. the bma280 is much more stable and lower noise than other sensors out there; it features a noise density of 120 µg/√hz. not sure about the mpu-6700, but the mpu-6500 is 250 µg/√hz. in my research on using consumer mems sensors to measure inclination, the 16-bit resolution on the mpu-6500 does not help a lot if the noise is much higher.
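[editor's note] to put those noise-density numbers in perspective, here is the usual rms-noise arithmetic (density times the square root of bandwidth), with an assumed 50 hz bandwidth and an assumed ±2 g range; both assumptions are the editor's, not from the comment:

    def rms_noise_ug(density_ug_per_rthz, bandwidth_hz):
        # rms noise over a bandwidth = noise density * sqrt(bandwidth)
        return density_ug_per_rthz * bandwidth_hz ** 0.5

    bma280_rms  = rms_noise_ug(120, 50)    # ~850 ug rms
    mpu6500_rms = rms_noise_ug(250, 50)    # ~1770 ug rms
    lsb_ug = 4_000_000 / 2 ** 16           # 16 bits over +/-2 g is ~61 ug per count
    print(bma280_rms, mpu6500_rms, lsb_ug)

on those assumptions the noise floor sits an order of magnitude above the 16-bit step size, which is the commenter's point: extra resolution buys little when the noise dominates.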
midnight.js: a jquery plugin to switch headers based on the content below
very nice looking landing page. may i ask how you make the scroll so smooth?
the use of the word "header" here was a little confusing. when i read the (hn) title i thought it was referring to http headers (i know it doesn't make sense... thought maybe i was missing something). then when i first looked at the page i figured it was referring to the h[1-6] elements (and thus thought the library was pretty dumb). only after that did i figure out what it was actually referring to. that said... i don't really have a better word to use. also, at least some of the confusion may have been due to the hn headline not having the word "fixed" in it (the actual project does, but i didn't re-read the title).
midnight.js: a jquery plugin to switch headers based on the content below
the use of the word "header" here was a little confusing. when i read the (hn) title i thought it was referring to http headers (i know it doesn't make sense... thought maybe i was missing something). then when i first looked at the page i figured it was referring to the h[1-6] elements (and thus thought the library was pretty dumb). only after that did i figure out what it was actually referring to. that said... i don't really have a better word to use. also, at least some of the confusion may have been due to the hn headline not having the word "fixed" in it (the actual project does, but i didn't re-read the title).
can you just use angular?
midnight.js: a jquery plugin to switch headers based on the content below
can you just use angular?
when i stop midway between sections the header has two styles at once, how did you work that out?
midnight.js: a jquery plugin to switch headers based on the content below
when i stop midway between sections the header has two styles at once, how did you work that out?
every cool new thing targeting the browser uses jquery. how does this relate to using a big framework like react?
an unobstructed view
> bet you, like everyone else, assumed that if the dealer put his advertising frame on your plate, it must be lawful.
i did not assume that, and i reject that "everyone else" did.
when a petty law is frequently violated and overlooked, law enforcement officers can use it as a tool to confront a citizen who would otherwise be entirely innocent. often other underlying suspicions motivate investigation of citizens, and these laws are simply used as a pretense. sometimes these suspicions have merit, but it seems like officers too often use these enforcements as a basis to act out their own prejudices. if an arbitrarily-enforced law has fundamental merit, then the enforcement is the problem. as much as i shudder at the thought, it sounds like the solution here is complete enforcement of every law. the hope would be that the ensuing uproar would cause removal or revision of the laws to ease up the resulting oppression.
an unobstructed view
when a petty law is frequently violated and overlooked, law enforcement officers can use it as a tool to confront a citizen who would otherwise be entirely innocent. often other underlying suspicions motivate investigation of citizens, and these laws are simply used as a pretense. sometimes these suspicions have merit, but it seems like officers too often use these enforcements as a basis to act out their own prejudices. if an arbitrarily-enforced law has fundamental merit, then the enforcement is the problem. as much as i shudder at the thought, it sounds like the solution here is complete enforcement of every law. the hope would be that the ensuing uproar would cause removal or revision of the laws to ease up the resulting oppression.
in general, once there are enough laws that a person cannot know all of the laws that apply to them, selective enforcement of laws, and hence corruption, is inevitable.
an unobstructed view
in general, once there are enough laws that a person cannot know all of the laws that apply to them, selective enforcement of laws, and hence corruption, is inevitable.
if i were him, i'd file a claim against the dealer. but again, it is weird (and scary) how some laws like this are arbitrarily enforced.
an unobstructed view
if i were him, i'd file a claim against the dealer. but again, it is weird (and scary) how some laws like this are arbitrarily enforced.
in what way is this keeping the peace or beneficial to the public? actions like the above are why many citizens do not respect the laws or those who apply them: they are not acting in a fittingly honorable manner.
why the z-80's data pins are scrambled
amazing analysis. another reminder that we're all standing on the shoulders of giants every time we whip out our phones carrying billions upon billions of gates...
so, i'm thinking... i think to connect a memory chip you just don't care, and you can swap them as you want (as long as you connect the 8 data pins to 8 data pins in the memory). for io you care, of course, or you "just" shuffle all the data that you want to write (which is a sure way of making someone go crazy).
why the z-80's data pins are scrambled
so, i'm thinking... i think to connect a memory chip you just don't care, and you can swap them as you want (as long as you connect the 8 data pins to 8 data pins in the memory). for io you care, of course, or you "just" shuffle all the data that you want to write (which is a sure way of making someone go crazy).
there was a jpl probe years ago (can't remember which, and can't seem to find a reference) that had a radiation-hardened memory ic with error correcting codes and a system to detect and correct the bit flips that were expected due to cosmic rays. after launch, the number of unrecoverable errors (due to multiple bits flipped within the same codeword) was higher than expected. it turned out that someone had swapped some combination of address or data lines, which ended up changing the physical grouping of bits within the codewords. some of the bits within a logical codeword were so close together that a single event was able to flip both of them, causing the error correction to fail.
why the z-80's data pins are scrambled
there was a jpl probe years ago (can't remember which, and can't seem to find a reference) that had a radiation-hardened memory ic with error correcting codes and a system to detect and correct the bit flips that were expected due to cosmic rays. after launch, the number of unrecoverable errors (due to multiple bits flipped within the same codeword) was higher than expected. it turned out that someone had swapped some combination of address or data lines, which ended up changing the physical grouping of bits within the codewords. some of the bits within a logical codeword were so close together that a single event was able to flip both of them, causing the error correction to fail.
back in the days of hand-made printed circuits, i randomly assigned both data and address pins on a microprocessor circuit, and got everything onto a single side with just one or two jumpers. i felt so clever. then i remembered that the program in the rom assumed a particular bit numbering, literally while my board was bubbling away in the ferric chloride. oops. rather than re-design the board, i thought about writing a program to rearrange my binaries, or making a socket adapter for the eeprom programmer. the socket adapter won out.
why the z-80's data pins are scrambled
back in the days of hand-made printed circuits, i randomly assigned both data and address pins on a microprocessor circuit, and got everything onto a single side with just one or two jumpers. i felt so clever. then i remembered that the program in the rom assumed a particular bit numbering, literally while my board was bubbling away in the ferric chloride. oops. rather than re-design the board, i thought about writing a program to rearrange my binaries, or making a socket adapter for the eeprom programmer. the socket adapter won out.
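[editor's note] the "program to rearrange my binaries" the commenter considered could be as small as a bit permutation applied to every byte (plus an address permutation, if the address lines were also swapped). the wiring table below is invented purely for illustration:

    # data_map[i] = the rom data line that cpu data bit i actually ended up wired to
    DATA_MAP = [3, 1, 4, 0, 7, 5, 2, 6]           # made-up permutation

    def remap_byte(b, mapping=DATA_MAP):
        out = 0
        for src, dst in enumerate(mapping):
            if b & (1 << src):
                out |= 1 << dst
        return out

    def remap_rom(image: bytes) -> bytes:
        # only the data bits are shuffled here; swapped address lines would also
        # require moving each byte to its permuted address
        return bytes(remap_byte(b) for b in image)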
the motivation behind splitting the data bus is to allow the chip to perform activities in parallel. for instance an instruction can be read from the data pins into the instruction logic at the same time that data is being copied between the alu and registers. essentially pipelining, several years before the risc movement popularised it? could the z80 have been one of the first pipelined single-chip cpus? that was a very interesting article. i've tried staring at the visual6502 chip images for a long time, and although i understand the principles behind how diffusion/polysilicon/metal layers are put together to form transistors, for some reason i feel absolutely lost trying to follow the connections and find the borders between the regions, especially when one layer is hidden beneath another. even looking at the nor gate with its layout side-by-side i can't see much beyond the metal layer, despite it being partially transparent. i have no problems with transistor-level schematics, however. is there some sort of trick to being able to easily read and follow the circuitry in die images and layout-level diagrams? it's like some people can read these and visualise/draw the schematic immediately.
systemd: the biggest fallacies
he's missing the point about socket activation. the problem it solves is different: if you have daemon b that depends on daemon a, you want to express that dependency in the init system, so that it starts a before b. but just starting process a isn't enough to ensure that a is actually listening on the socket that b wants to connect to, since it takes some time to load the binary and do initialization. there's a race condition there, where b can try to connect before a binds to the socket. to fix the race, the init system needs to either monitor for the availability of the socket, or do it systemd-style where it opens the socket itself.
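[editor's note] for readers unfamiliar with the convention being described: systemd passes pre-opened listening sockets to an activated service starting at file descriptor 3, advertised through the LISTEN_FDS / LISTEN_PID environment variables. a minimal python sketch of a daemon that accepts such a socket and falls back to binding its own when run by hand (the port number is arbitrary):

    import os, socket

    SD_LISTEN_FDS_START = 3        # first fd the init system hands to an activated service

    def activated_sockets():
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

    socks = activated_sockets()
    if not socks:                  # started by hand: open and bind the socket ourselves
        srv = socket.socket()
        srv.bind(("127.0.0.1", 8080))
        srv.listen()
        socks = [srv]

because the init system already holds the socket open before the daemon runs, a dependent service can connect immediately and the connection simply queues until the daemon accepts it, which is exactly the race the comment describes.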
some of his arguments look a little bit botched. for example:
> fallacy #1: "systemd is multiple binaries, therefore it is not monolithic"
well, following wikipedia (<link>), monolithic means either a non-modular application or a self-contained application. in this sense either his counter-argument is simply wrong, or the examples list he gives at the end is wrong.
> fallacy #4.1: "unit files reduce complexity"
here he compares the ca. 275,000 loc of systemd with the 10,000 loc of shell scripts used as initscripts on debian. let's ignore that the 275,000 loc contain much more than the unit management in the systemd daemon. but why are the bugs always in the scripts and never in the shell, as he claims? and sorry, i don't take his word that c is always more error prone than the shell. that is true when the shell is used for its intended purpose: to start commands, pipe between them, and do a little bit of flow control. for everything else you surely want a general purpose language.
> fallacy #7: "systemd gives you socket activation!"
well, according to boot charts, socket activation is not only a marketing gimmick.
edit: add fallacy 7
systemd: the biggest fallacies
some of his arguments look a little bit botched. for example:
> fallacy #1: "systemd is multiple binaries, therefore it is not monolithic"
well, following wikipedia (<link>), monolithic means either a non-modular application or a self-contained application. in this sense either his counter-argument is simply wrong, or the examples list he gives at the end is wrong.
> fallacy #4.1: "unit files reduce complexity"
here he compares the ca. 275,000 loc of systemd with the 10,000 loc of shell scripts used as initscripts on debian. let's ignore that the 275,000 loc contain much more than the unit management in the systemd daemon. but why are the bugs always in the scripts and never in the shell, as he claims? and sorry, i don't take his word that c is always more error prone than the shell. that is true when the shell is used for its intended purpose: to start commands, pipe between them, and do a little bit of flow control. for everything else you surely want a general purpose language.
> fallacy #7: "systemd gives you socket activation!"
well, according to boot charts, socket activation is not only a marketing gimmick.
edit: add fallacy 7
there is a lot to digest, but so far this looks like an excellent example of critical thinking, regardless of one's feelings or intuition regarding systemd. both proponents and detractors have much to be thankful for in this article.
systemd: the biggest fallacies
there is a lot to digest, but so far this looks like an excellent example of critical thinking, regardless of one's feelings or intuition regarding systemd. both proponents and detractors have much to be thankful for in this article.
first of all, thanks for linking to uselessd. the writeup was quite nice. i was actually in the process of writing my own notes to respond to poettering's "the biggest myths", but your approach is better. i'll definitely use it as a reference to link to in discussions. that said, i have a little caveat for #9. though systemd violating kiss is virtually undeniable, you should reword it so as to point it out on systemd's own merits, not in relation to sysvinit, which systemd explicitly intends to be more complex than.
systemd: the biggest fallacies
first of all, thanks for linking to uselessd. the writeup was quite nice. i was actually in the process of writing my own notes to respond to poettering's "the biggest myths", but your approach is better. i'll definitely use it as a reference to link to in discussions. that said, i have a little caveat for #9. though systemd violating kiss is virtually undeniable, you should reword it so as to point it out on systemd's own merits, not in relation to sysvinit, which systemd explicitly intends to be more complex than.
some of these are arguments i'm glad to see getting more attention, such as fallacy #1: "systemd is multiple binaries, therefore it is not monolithic". others strike me as a stretch. for example, fallacy #4.1: "unit files reduce complexity". no, i don't want the least complex init system possible. i think it's obvious to people who have written both system v init scripts and systemd or upstart configurations that the latter are dramatic improvements. the chance that i will write a buggy init script is, unfortunately, high. the chance that i will need to debug systemd when writing a unit file is very low.
ask hn: are there any hackathons for non-students and non-graduate-students?
there are far too many hackathons for non-students to list, such as angelhack, battlehack, and startup weekend (it's arguable whether that's a hackathon), amongst many more. a lot of student hackathons accept dropouts too, on a case-by-case basis. but there are hackathons for all :)
check meetup.com, challenge post, and hacker league. hackathons are taking over the world.
ask hn: are there any hackathons for non-students and non-graduate-students?
check meetup.com, challenge post, and hacker league. hackathons are taking over the world.
yes. i see lots of hackathons all the time that don't care about student status. have you tried to find them in your area and failed?
ask hn: are there any hackathons for non-students and non-graduate-students?
yes. i see lots of hackathons all the time that don't care about student status. have you tried to find them in your area and failed?
if you're in the chicago area, you might be interested in an upcoming civic hackathon that we're hosting: <link>
ask hn: are there any hackathons for non-students and non-graduate-students?
if you're in the chicago area, you might be interested in an upcoming civic hackathon that we're hosting: <link>
location context would also help in this situation; i'm sure there are local hackathons that'd allow anyone. also, make sure you check out the startup digest (<link>) for your location to see what hackathons are happening.
legendary phreaker john draper / captain crunch needs our help

hey guys,

i believe a lot of people can relate to the legend, john draper - captain crunch. he's been really ill lately and has undergone multiple surgeries! his insurance is almost gone and they won't cover the cost of post-op medication, which ran out a week ago.

please everyone, let's try to help him! the campaign has been verified by john himself: <link>

about john, from wikipedia:

"john thomas draper (born 1943), also known as captain crunch, crunch or crunchman (after cap'n crunch, the mascot of a breakfast cereal), is an american computer programmer and former phone phreak. he is a legendary figure within the computer programming world and the hacker and security community. draper has long maintained a nomadic lifestyle;[1] as of may 2013, he resides in las vegas, nevada.[2]"

it's not much he needs, and it would be wonderful if anyone can help!

bbc article: <link>

thank you! below is the fundraiser: <link>
oh man, when i lived in los angeles, john used to come to the monthly santa monica django meetups. he even came to a few of the pyladies events put on by the wonderful daniel greenfield and audrey greenfield (roy at the time). he was a bit surly, but great people, and an inspiration for a generation of phreakers. i started out blue boxing with an acoustic coupler a loooooong time ago. contributor++
thanks for the tip; donated. this guy was my heroes' hero, around the time when i found my first copy of 2600.
legendary phreaker john draper / captain crunch needs our help

hey guys,

i believe a lot of people can relate to the legend, john draper - captain crunch. he's been really ill lately and has undergone multiple surgeries! his insurance is almost gone and they won't cover the cost of post-op medication, which ran out a week ago.

please everyone, let's try to help him! the campaign has been verified by john himself: <link>

about john, from wikipedia:

"john thomas draper (born 1943), also known as captain crunch, crunch or crunchman (after cap'n crunch, the mascot of a breakfast cereal), is an american computer programmer and former phone phreak. he is a legendary figure within the computer programming world and the hacker and security community. draper has long maintained a nomadic lifestyle;[1] as of may 2013, he resides in las vegas, nevada.[2]"

it's not much he needs, and it would be wonderful if anyone can help!

bbc article: <link>

thank you! below is the fundraiser: <link>
thanks for the tip; donated. this guy was my heroes' hero, around the time when i found my first copy of 2600.
guys sorry, first time poster here... links aren't directly linked. here: <link>
legendary phreaker john draper / captain crunch needs our help

hey guys,

i believe a lot of people can relate to the legend, john draper - captain crunch. he's been really ill lately and has undergone multiple surgeries! his insurance is almost gone and they won't cover the cost of post-op medication, which ran out a week ago.

please everyone, let's try to help him! the campaign has been verified by john himself: <link>

about john, from wikipedia:

"john thomas draper (born 1943), also known as captain crunch, crunch or crunchman (after cap'n crunch, the mascot of a breakfast cereal), is an american computer programmer and former phone phreak. he is a legendary figure within the computer programming world and the hacker and security community. draper has long maintained a nomadic lifestyle;[1] as of may 2013, he resides in las vegas, nevada.[2]"

it's not much he needs, and it would be wonderful if anyone can help!

bbc article: <link>

thank you! below is the fundraiser: <link>
guys sorry, first time poster here... links aren't directly linked. here: <link>
not to disrespect, but at 71 isn't he on medicare? that and co-insurance won't cover his medical treatments?
legendary phreaker john draper / captain crunch needs our help

hey guys,

i believe a lot of people can relate to the legend, john draper - captain crunch. he's been really ill lately and has undergone multiple surgeries! his insurance is almost gone and they won't cover the cost of post-op medication, which ran out a week ago.

please everyone, let's try to help him! the campaign has been verified by john himself: <link>

about john, from wikipedia:

"john thomas draper (born 1943), also known as captain crunch, crunch or crunchman (after cap'n crunch, the mascot of a breakfast cereal), is an american computer programmer and former phone phreak. he is a legendary figure within the computer programming world and the hacker and security community. draper has long maintained a nomadic lifestyle;[1] as of may 2013, he resides in las vegas, nevada.[2]"

it's not much he needs, and it would be wonderful if anyone can help!

bbc article: <link>

thank you! below is the fundraiser: <link>
not to disrespect, but at 71 isn't he on medicare? that and co-insurance won't cover his medical treatments?
not able to contribute at this time but i upvoted and shared to as many people as i could. i remember reading about him 20+ years ago and being very inspired. looks like they will make the goal, very nice.
rockstor, a linux and btrfs based nas solution
interesting =). i'm currently running freenas with zfs. would be curious to see how this compares. the one thing missing for me on freenas is some kind of file search/indexing feature. i wonder if the fact that this is linux based will make adding something like that easier.
the gui looks pretty cool. personally i would not trust btrfs for a nas. i have not had the best experience running various production servers with btrfs. i switched (back) to zfs and never looked back; it is just better in every regard. i also administer a freenas box for a small business and this stuff is rock solid; i would only wish for an _easy_ solution to get the permission stuff right in a multi-user setting. nonetheless, thumbs up for creating this, cool stuff!
rockstor, a linux and btrfs based nas solution
the gui looks pretty cool. personally i would not trust btrfs for a nas. i have not had the best experience running various production servers with btrfs. i switched (back) to zfs and never looked back; it is just better in every regard. i also administer a freenas box for a small business and this stuff is rock solid; i would only wish for an _easy_ solution to get the permission stuff right in a multi-user setting. nonetheless, thumbs up for creating this, cool stuff!
my very first questions regarding a potential storage solution revolve around data loss:
1. can we enumerate the data loss scenarios?
2. how is drive failure handled?
3. how may data be corrupted, and how is such corruption detected?
4. for every data loss scenario, what is the recovery procedure?
here is all i could find: <link>. of course, there is a wealth of information on such questions for standard raid, but i would suggest, for marketing purposes, that rockstor synthesize the available information (from the many relevant layers of data management) in a coherent fashion, specific to their product. it doesn't have to be deep, but it should be at least minimally comprehensive and broad, with pointers to more detailed, layer-specific information. also, it's fine if the recovery scenario is "restore from backup" for, e.g., the scenario where data is deleted by an authorized user. if so, there should be at least a minimal "backup story".
rockstor, a linux and btrfs based nas solution
my very first questions regarding a potential storage solution revolve around data loss:
1. can we enumerate the data loss scenarios?
2. how is drive failure handled?
3. how may data be corrupted, and how is such corruption detected?
4. for every data loss scenario, what is the recovery procedure?
here is all i could find: <link>. of course, there is a wealth of information on such questions for standard raid, but i would suggest, for marketing purposes, that rockstor synthesize the available information (from the many relevant layers of data management) in a coherent fashion, specific to their product. it doesn't have to be deep, but it should be at least minimally comprehensive and broad, with pointers to more detailed, layer-specific information. also, it's fine if the recovery scenario is "restore from backup" for, e.g., the scenario where data is deleted by an authorized user. if so, there should be at least a minimal "backup story".
the only data protection options i could find were raid 1 and 10 (raid 0 is a performance option), and as the chance of data loss when attempting to re-silver a 3tb mirror is about 1 in 5, data protection here is not enterprise quality yet. the ui stuff is great, but the tricky bit about building a storage system is not provisioning it, or getting the access protocols right; it is all about finding all the ways that data can be destroyed (both silently and noisily) and guarding against them. so if you want to stick with the enterprise target, then you need something like the zfs on linux page, which describes every way you can get data zapped and how you will prevent that from happening. if you want to be just an off-the-shelf "hey, here's something that will make your access point into something like a nas device", then you get to lose data when a disk goes bad, or a memory chip goes bad, or a network cable is loose, or the power supply cuts out, or the cat knocks it off the table, etc.
rockstor, a linux and btrfs based nas solution
the only data protection options i could find were raid 1 and 10 (raid 0 is a performance option), and as the chance of data loss when attempting to re-silver a 3tb mirror is about 1 in 5, data protection here is not enterprise quality yet. the ui stuff is great, but the tricky bit about building a storage system is not provisioning it, or getting the access protocols right; it is all about finding all the ways that data can be destroyed (both silently and noisily) and guarding against them. so if you want to stick with the enterprise target, then you need something like the zfs on linux page, which describes every way you can get data zapped and how you will prevent that from happening. if you want to be just an off-the-shelf "hey, here's something that will make your access point into something like a nas device", then you get to lose data when a disk goes bad, or a memory chip goes bad, or a network cable is loose, or the power supply cuts out, or the cat knocks it off the table, etc.
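[editor's note] the "1 in 5" re-silvering figure is consistent with the usual unrecoverable-read-error arithmetic, assuming the common consumer-drive spec of one ure per 1e14 bits read; the spec value is an assumption of the editor, not stated in the comment.

    import math

    bits_read = 3e12 * 8                           # reading a full 3 tb drive
    p_per_bit = 1e-14                              # assumed consumer-drive ure rate
    p_fail = 1 - math.exp(-p_per_bit * bits_read)  # poisson approximation of 1-(1-p)^n
    print(p_fail)                                  # ~0.21, i.e. on the order of 1 in 5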
all three of the server hardware suggestions are discontinued.
quit covering up your toxic hellstew with docker
i read the opening paragraph and thought 'ah well, another boring didactic angry developer rant.' just as i was about to close the tab, my eye caught the start of the second paragraph: &quot;this reminds me of my days in the space shuttle program.&quot; which, to put it mildly, is something of a credibility boost. so i finished the article.
i have been working recently with a php application whose installation instructions are &quot;run this virtualbox image inside your network, and without a proxy in front of it because we are not properly configured&quot;. this php application is not trivial, but also not very complex. yet its developers do not provide this application as a pear package, and replied that this kind of thing is superfluous these days, vms are simpler. people like these developers fail to understand that there is a lot more in a vm than just their software (from the kernel to all the exposed services) and that by distributing a vm they are also becoming the maintainers of a very complex set of dependencies. not that they care: it took them two months to release a vm not vulnerable to heartbleed. let's see how long before they release a vm not vulnerable to shellshock.
quit covering up your toxic hellstew with docker
i have been working recently with a php application whose installation instructions are &quot;run this virtualbox image inside your network, and without a proxy in front of it because we are not properly configured&quot;. this php application is not trivial, but also not very complex. yet its developers do not provide this application as a pear package, and replied that this kind of thing is superfluous these days, vms are simpler. people like these developers fail to understand that there is a lot more in a vm than just their software (from the kernel to all the exposed services) and that by distributing a vm they are also becoming the maintainers of a very complex set of dependencies. not that they care: it took them two months to release a vm not vulnerable to heartbleed. let's see how long before they release a vm not vulnerable to shellshock.
this makes sense, i work in a pretty convoluted sharepoint environment. it is completely impossible to spin up a development environment without dozens of scripts and knowing exactly what lists to manually create and what data must be present inside of them. this means that new hires are handed a cryptic and seriously out-of-date document with instructions on how to set up a proper vm environment. they check out their code. they deploy... but wait, the deploy fails because of missing data, missing document types, missing lists... open up sharepoint, enable some doc types not turned on by default, turn on some more features, add a list. deploy again, no wait, it died again, oh now it's a different doc type and a different feature that the deployment doesn't turn on by default... etc. the end result is a mess that requires days to get up and running, not hours.
quit covering up your toxic hellstew with docker
this makes sense, i work in a pretty convoluted sharepoint environment. it is completely impossible to spin up a development environment without dozens of scripts and knowing exactly what lists to manually create and what data must be present inside of them. this means that new hires are handed a cryptic and seriously out-of-date document with instructions on how to set up a proper vm environment. they check out their code. they deploy... but wait, the deploy fails because of missing data, missing document types, missing lists... open up sharepoint, enable some doc types not turned on by default, turn on some more features, add a list. deploy again, no wait, it died again, oh now it's a different doc type and a different feature that the deployment doesn't turn on by default... etc. the end result is a mess that requires days to get up and running, not hours.
this reminds me of the &quot;every dev should be senior&quot; mindset that is all too common in this industry. i don't see how assuming every devops specialist can be replaced by an average engineer is a real solution.
quit covering up your toxic hellstew with docker
this reminds me of the &quot;every dev should be senior&quot; mindset that is all too common in this industry. i don't see how assuming every devops specialist can be replaced by an average engineer is a real solution.
yes, simplicity leads to understanding and i don't understand why more people don't get this simple concept. i've dealt with codebases with such a horrendous build process that it doesn't matter what kind of sugar you sprinkle on top because making any change is practically impossible. that complexity has to live somewhere and if you offload it to the docker buildfile it's still in the buildfile. the problem at the end of the day comes down to the fact that most developers either don't understand enough to build proper build pipelines or they are lazy or they don't think complexity in the build pipeline is anything to worry about. docker does not change those things.
iphone 6 and 6 plus not as bendy as believed
christ, will the damn thing deform if i keep it in my pocket like i would any other phone or is it just a minuscule percentage of devices that have had photos re-posted in the lolomgwtf nature of these inane corporate allegiance squabbles? that is all i care about, i'm not trying to build a damn house out of these things.
my biggest issue with apple's response is that they have not revealed their policy on what will happen if i come into apple care with a malformed phone. will they blame me for putting it in my pocket wrong... or assure me that they can either replace it or give me some kind of case that lessens the risk of curving?
iphone 6 and 6 plus not as bendy as believed
my biggest issue with apple's response is that they have not revealed their policy on what will happen if i come into apple care with a malformed phone. will they blame me for putting it in my pocket wrong... or assure me that they can either replace it or give me some kind of case that lessens the risk of curving?
almost all the bending reports are for the iphone 6+ bending &amp; twisting at/close to the volume-down button when placed in pockets etc. placing the phone between two flat blocks and applying pressure at the exact center (three-point test) will not test for this specific issue. even then, it should be pretty suspicious that in the test conducted, the new phones are at the bottom of the stiffness list. on another front, the apple response is textbook - deny, minimize (deride the press), then grudgingly make changes even while insisting that none were needed to begin with. expect the next lot of these phones to experience a sudden stiffening.
iphone 6 and 6 plus not as bendy as believed
almost all the bending reports are for the iphone 6+ bending &amp; twisting at/close to the volume-down button when placed in pockets etc. placing the phone between two flat blocks and applying pressure at the exact center (three-point test) will not test for this specific issue. even then, it should be pretty suspicious that in the test conducted, the new phones are at the bottom of the stiffness list. on another front, the apple response is textbook - deny, minimize (deride the press), then grudgingly make changes even while insisting that none were needed to begin with. expect the next lot of these phones to experience a sudden stiffening.
phone                   deformation   case separation
------------------------------------------------------
htc one (m8)            70 lbs.       90 lbs.
apple iphone 6          70 lbs.       100 lbs.
apple iphone 6 plus     90 lbs.       110 lbs.
lg g3                   130 lbs.      130 lbs.
apple iphone 5          130 lbs.      150 lbs.
samsung galaxy note 3   150 lbs.      150 lbs.

direct link to the video: <link>
iphone 6 and 6 plus not as bendy as believed
phone                   deformation   case separation
------------------------------------------------------
htc one (m8)            70 lbs.       90 lbs.
apple iphone 6          70 lbs.       100 lbs.
apple iphone 6 plus     90 lbs.       110 lbs.
lg g3                   130 lbs.      130 lbs.
apple iphone 5          130 lbs.      150 lbs.
samsung galaxy note 3   150 lbs.      150 lbs.

direct link to the video: <link>
this test shows a severe lack of understanding of flexural strength testing. this test is a bastardization of the three-point bending test that is commonly performed for peak bending strength analysis of materials. bending is a result of moments, and moments are driven by moment arms. these tests should not be looking at the point load required to induce deformation but rather the moment required to induce deformation. each of these should have been converted into equivalent moments based on the size of the phone. for example, the m8 is shown as 146.3 mm (5.76 in) h; 70.6 mm (2.78 in) w; 9.4 mm (0.37 in) d and the iphone 6 is 138.1 x 67 x 6.9 mm (5.44 x 2.64 x 0.27 in), and both are shown to &quot;deform&quot; at 70 lbs. the m8 has an induced moment of 100 lb-in while the iphone 6 has an induced moment of 95 lb-in. the resultant bending stress, assuming a linear stress distribution across the section, is then 1575 psi for the m8 and 2950 psi for the iphone 6. similar analyses should be undertaken for each of the other phones. a more accurate test for the failure mode of concern would be a four-point load test in which the moment is constant across a portion of the phone. the three-point load test induces a moment that is maximum at the point of load application. the four-point load test is more likely to show you where the point of weakness in the phone is.
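(making the arithmetic behind those figures explicit - a sketch assuming the standard three-point-bending relations, with the span taken as the full phone length l, width w, and thickness d, using the inch values quoted above:)

    M = \frac{P l}{4}, \qquad \sigma = \frac{M c}{I} = \frac{6 M}{w d^{2}}

    \text{m8: } M = \frac{70 \times 5.76}{4} \approx 101\ \text{lb-in}, \quad
                \sigma \approx \frac{6 \times 101}{2.78 \times 0.37^{2}} \approx 1.6 \times 10^{3}\ \text{psi}

    \text{iphone 6: } M = \frac{70 \times 5.44}{4} \approx 95\ \text{lb-in}, \quad
                      \sigma \approx \frac{6 \times 95}{2.64 \times 0.27^{2}} \approx 3.0 \times 10^{3}\ \text{psi}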
for shanghai jobs, only ‘normal size’ need apply
i really appreciate the extent to which american laws make it difficult for employers to eliminate candidates based on factors that are irrelevant to the job. in 2012, i considered applying for a fellowship in israel. the application form required you to say whether you had consulted a mental health professional in the last two years and to list any medications you were currently taking and for what purpose. in the united states, it is completely illegal to put questions like this on a job application form (and for very good reason). you are allowed to ask these types of questions after you've made an offer, and even then you have to demonstrate that the person cannot do the job in order to rescind the offer. i asked the organization why they collected this information, and they told me it was for the safety of the participants and that they &quot;couldn't in good conscience&quot; not ask these questions. i didn't apply for the fellowship.
i remember seeing chinese job postings on indeed.com that included not only physical measurements for a receptionist position, but also attractiveness requirements. it totally surprised us (we were in the us), but i guess it's a cultural thing? then i realized that 60 years ago, us job postings probably had the same kinds of requirements.
for shanghai jobs, only ‘normal size’ need apply
i remember seeing chinese job postings on indeed.com that included not only physical measurements for a receptionist position, but also attractiveness requirements. it totally surprised us (we were in the us), but i guess it's a cultural thing? then i realized that 60 years ago, us job postings probably had the same kinds of requirements.
a couple years ago, i spent a year as an english teacher in guangxi province, china. hiring on the basis of physical or racial characteristics was very blatant in that area and industry. it's difficult for non-white laowai to get esl positions, because many training centers choose not to hire those who don't &quot;look like a native english speaker&quot; -- either because the school administrators believe this, or they think it'll drive away parents/customers who think their child isn't getting a &quot;real education&quot; from a &quot;real english-speaking foreigner.&quot; it was a new and very uncomfortable feeling to be so blatantly valued (and used in marketing) for being little more than a white-faced billboard by the school/business i was working at.
for shanghai jobs, only ‘normal size’ need apply
a couple years ago, i spent a year as an english teacher in guangxi province, china. hiring on the basis of physical or racial characteristics was very blatant in that area and industry. it's difficult for non-white laowai to get esl positions, because many training centers choose not to hire those who don't &quot;look like a native english speaker&quot; -- either because the school administrators believe this, or they think it'll drive away parents/customers who think their child isn't getting a &quot;real education&quot; from a &quot;real english-speaking foreigner.&quot; it was a new and very uncomfortable feeling to be so blatantly valued (and used in marketing) for being little more than a white-faced billboard by the school/business i was working at.
the same thing happens in all of asia. in japan, you have to apply with a picture on your resume, and hiring managers will ask married women if they intend to get pregnant, because they won't hire them. i'm sure it's similar across other countries. my friend, who is white, and his wife who is japanese moved to japan and after 9 months came back to the us because the conditions were so bad compared to the us. not only do they work you to the bone, the pay is incredibly low and you are subject to blatant sexism and racism.
for shanghai jobs, only ‘normal size’ need apply
the same thing happens in all of asia. in japan, you have to apply with a picture on your resume, and hiring managers will ask married women if they intend to get pregnant, because they won't hire them. i'm sure it's similar across other countries. my friend, who is white, and his wife who is japanese moved to japan and after 9 months came back to the us because the conditions were so bad compared to the us. not only do they work you to the bone, the pay is incredibly low and you are subject to blatant sexism and racism.
not surprised to read this. the chinese culture is a very racially discriminatory culture. not only are suitable candidates for jobs selected based upon physical attributes and racial backgrounds, social interactions are also racially based - i am talking simple things like sharing a table at a crowded eating place. even within the chinese, they are separated into mandarin, cantonese, hakka etc., each with their own cultural character. thankfully, the younger generations are starting to change this mindset as the world gets smaller via the internet and fast air travel. however, with hiring practices such as that mentioned in the post, it may take a couple of generations before we see a significant change.
student course evaluations get an 'f'
the way things work at my university, at least in the science faculty, is that word of good or bad teachers just travels through the grapevine. and it is usually quite accurate. then we have (an active, and paid) representation in the faculty's decision-making body, which leads to, in my experience, the faculty actually dealing with bad professors. not everything has to be boxed in by numerics; sometimes simply speaking up and listening is the easiest and best solution.
when i was at columbia, we used an interesting website called culpa (&quot;columbia undergraduate listing of professor ability&quot;; <link>). what makes the site interesting is that unlike many other sites, students can only give written evaluations and there are no numerical scores. reviews are generally thoughtful and, in some cases, were particularly useful in choosing a professor for a course.
student course evaluations get an 'f'
when i was at columbia, we used an interesting website called culpa (&quot;columbia undergraduate listing of professor ability&quot;; <link>). what makes the site interesting is that unlike many other sites, students can only give written evaluations and there are no numerical scores. reviews are generally thoughtful and, in some cases, were particularly useful in choosing a professor for a course.
the methodology used in the study from italy[1] mentioned in the article was quite interesting, because it had actual random assignment of comparable groups of students to different instructors. the students were then followed up over their further study in the same university. that's a good kind of data set to have for examining issues like this. the alternative means of evaluating teachers mentioned in the article are also quite reasonable. both peer review and content analysis of instructor-prepared materials can identify better teachers with different sources of bias in the evaluation, allowing a triangulation with student ratings. here on hacker news, from another participant, i learned about a more rigorous method of student ratings of teachers[2] that ought to be applied (with appropriate adaptations) to higher education teaching. it appears to work well in k-12 teaching. [1] <link> [2] <link>
student course evaluations get an 'f'
the methodology used in the study from italy[1] mentioned in the article was quite interesting, because it had actual random assignment of comparable groups of students to different instructors. the students were then followed up over their further study in the same university. that's a good kind of data set to have for examining issues like this. the alternative means of evaluating teachers mentioned in the article are also quite reasonable. both peer review and content analysis of instructor-prepared materials can identify better teachers with different sources of bias in the evaluation, allowing a triangulation with student ratings. here on hacker news, from another participant, i learned about a more rigorous method of student ratings of teachers[2] that ought to be applied (with appropriate adaptations) to higher education teaching. it appears to work well in k-12 teaching. [1] <link> [2] <link>
and of course, those with legitimate complaints with the prof simply drop the class a few days or weeks in, and never get a chance to fill out an evaluation...
student course evaluations get an 'f'
and of course, those with legitimate complaints with the prof simply drop the class a few days or weeks in, and never get a chance to fill out an evaluation...
so here is a fun fact about professor performance. when i was a grad student, various calculus classes had a shared final. we had some professors with great evaluations who devoted their life to teaching. we had some germans and chinese who students perceived to barely speak english [1] and hated. student evals were fairly predictable - hard exams would reduce scores, jokes and sympathy would increase them. everyone's students followed the exact same normal curve. i've been told that the only way one can make an observable difference in group final scores is to schedule a class at the same time as sports practice or remedial education. you need better students, not a better teacher. [1] i've noticed that either students are unable/unwilling to listen to a foreign accent, or perhaps i am unable to notice them. recently a friend of mine said my secretary had a heavy accent - i barely notice it.
cloud server reboots
and this is why live migrations (vmware vmotion, but also done by google compute engine) are so awesome. migrate vms from server x to server y, then patch and reboot server x. no vm downtime.
the timing of this announcement stinks. 2130 pdt on a friday night, long after most folks have gone home. making it even more painful, rackspace is providing 1 hour advance notification. for those of us hosted in the us, there's a rolling reboot window that starts at 0400 pdt on sunday morning. so, if you're a rackspace customer and care that your app shuts down cleanly and restarts properly, you get to wake up at 0400 pdt and check your email and stay near your laptop and internet at least once an hour, every hour, until (potentially) 0400 pdt on monday morning. hooray, my sunday plans are fucked! they suggest taking backups/snapshots of your instances before the reboot window. given the throughput required to push multi-hundred gb images from public cloud servers to cloud files for storage, i am willing to bet that the backup network is maxed out and will stay maxed out until the outage. i wonder if rackspace found out about the rumored xen exploit from the same people that told amazon, or if amazon told rackspace but waited a little bit to make it more painful for rackspace's customers...
cloud server reboots
the timing of this announcement stinks. 2130 pdt on a friday night, long after most folks have gone home. making it even more painful, rackspace is providing 1 hour advance notification. for those of us hosted in the us, there's a rolling reboot window that starts at 0400 pdt on sunday morning. so, if you're a rackspace customer and care that your app shuts down cleanly and restarts properly, you get to wake up at 0400 pdt and check your email and stay near your laptop and internet at least once an hour, every hour, until (potentially) 0400 pdt on monday morning. hooray, my sunday plans are fucked! they suggest taking backups/snapshots of your instances before the reboot window. given the throughput required to push multi-hundred gb images from public cloud servers to cloud files for storage, i am willing to bet that the backup network is maxed out and will stay maxed out until the outage. i wonder if rackspace found out about the rumored xen exploit from the same people that told amazon, or if amazon told rackspace but waited a little bit to make it more painful for rackspace's customers...
the xen vulnerability must be something severe if they are all doing this. [1] <link>
cloud server reboots
the xen vulnerability must be something severe if they are all doing this. [1] <link>
i envy the aws users who enjoyed the rolling reboots (which were az aware!) across a small minority of the ec2 fleet. (~10%, yeah?) at some point on sunday, i'm going to be picking up the pieces of our entire stack. rackspace doesn't even offer anything like availability zones. the last major maintenance they scheduled was over the july 4th weekend -- wasn't happy with that one either.
cloud server reboots
i envy the aws users who enjoyed the rolling reboots (which were az aware!) across a small minority of the ec2 fleet. (~10%, yeah?) at some point on sunday, i'm going to be picking up the pieces of our entire stack. rackspace doesn't even offer anything like availability zones. the last major maintenance they scheduled was over the july 4th weekend -- wasn't happy with that one either.
those of you who are good at sysadminning don't need this advice, but for the fellow borderline-competent people in the room: pay particular attention to this reboot if you recently did &quot;apt-get update&quot; or similar to take care of the bash problem. i've shot myself in the foot before and accepted new updates to e.g. mysql that caused the existing config file to raise a hard error on load, which was only discovered the next time the server was restarted. as you can imagine, inability to boot the database has unpleasant consequences for web apps connected to it.
postgresql outperforms mongodb in new round of tests
the only problem i have with json on postgres is that you can't update a property of a json object like so: update table set jsoncol-&gt;propertya = 42; you need to write an extension for that. easiest to do so using python, but sadly heroku doesn't support python on postgres since it's unsafe.
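(a minimal sketch of the usual workaround at this point - read the whole document, modify it client-side, and write the whole value back; the docs table, jsoncol column, id key, and dsn below are made up for illustration, and postgres 9.5 later added jsonb_set for a more direct path:)

    # read-modify-write of one property in a json column via psycopg2
    # (hypothetical "docs" table; assumes psycopg2 and a reachable database)
    import json
    import psycopg2

    conn = psycopg2.connect("dbname=example")  # hypothetical dsn
    with conn, conn.cursor() as cur:
        # read the whole document, locking the row for the update
        cur.execute("SELECT jsoncol FROM docs WHERE id = %s FOR UPDATE", (1,))
        doc = cur.fetchone()[0]          # psycopg2 decodes json columns to dicts
        doc["propertya"] = 42            # modify one field in application code
        # write the whole document back
        cur.execute("UPDATE docs SET jsoncol = %s WHERE id = %s",
                    (json.dumps(doc), 1))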
i never really &quot;got&quot; the new wave of nosql databases. mongo seemed to be the one i could most easily wrap my head around, but still. i was never sure, though, if that meant i had never faced a problem suitable for one of these dbmss or if my mind is just so warped by years of using relational engines (mostly postgres, or sqlite for simple projects) that i could not think of modeling my data any other way. recently though, i had to get familiar with the database schema of the erp system we use at work, plus some modifications that have been done to it over the years, and it kind of feels to me like somebody was trying to force a square peg through a round hole (i.e. trying to model data in relational terms, either not fully &quot;getting&quot; the relational model or using data that simply refuses to be modeled in that way). i sometimes think the people who wrote the erp system might have enjoyed a nosql dbms. then again, with a multi-user erp system, you &lt;i&gt;really&lt;/i&gt; want transactions (personally, i feel that acid-compliant transactions are the single most useful benefit of rdbms engines), and most nosql engines seem to kind of not have them.
postgresql outperforms mongodb in new round of tests
i never really &quot;got&quot; the new wave of nosql databases. mongo seemed to be the one i could most easily wrap my head around, but still. i was never sure, though, if that meant i had never faced a problem suitable for one of these dbmss or if my mind is just so warped by years of using relational engines (mostly postgres, or sqlite for simple projects) that i could not think of modeling my data any other way. recently though, i had to get familiar with the database schema of the erp system we use at work, plus some modifications that have been done to it over the years, and it kind of feels to me like somebody was trying to force a square peg through a round hole (i.e. trying to model data in relational terms, either not fully &quot;getting&quot; the relational model or using data that simply refuses to be modeled in that way). i sometimes think the people who wrote the erp system might have enjoyed a nosql dbms. then again, with a multi-user erp system, you &lt;i&gt;really&lt;/i&gt; want transactions (personally, i feel that acid-compliant transactions are the single most useful benefit of rdbms engines), and most nosql engines seem to kind of not have them.
did i miss something? mongodb was never ever faster than postgres. that's nothing new. most of these things are clear when one reads the mongodb docs: mongodb stores metadata (nearly) uncompressed on a per-document basis, so of course it uses way more disk space. it doesn't store the data in any efficient way either. also, it's pretty much unoptimized compared to postgres, which has been around for a really long time, so it's kinda slow. many functions in mongodb are actually implemented in javascript, not c, so that's also a factor, even when i guess it's not the big one here. mongodb has a lot of limitations that can really bite you (document size, even though that's the smallest one (gridfs), how you can do indices, even limitations in what your query can look like, etc.). the only thing that's good about mongodb is that it's nice for getting something up and running quickly and that it's a charm to scale (in many different ways), compared to postgresql. if postgresql had something built in(!) coming at least close to that (and development has a strong focus there) it would be perfect. for all these reasons many companies actually have hybrid systems, because sometimes one thing makes sense and sometimes the other. the benchmark seems strange, 'cause there are many sql and nosql databases that are faster and that's a kinda well-known fact. i think everyone who ever had to decide on a database system has known that, even without a benchmark. this makes it kinda look like an advertisement (look at the company behind the blog). i've been using postgresql 9.3 with json for a while now and it's great. also, i know it is possible to scale postgresql and it's really nice. still, a lot more complexity involved (again, depending on the use case). just use the right tool and please let's stop with such shallow comparisons, because i think it kinda harms the reputation of database engineers and system architects - and the authors of such comparisons. when you look for real comparisons and example use cases, typical patterns, or just some help, one always stumbles across these things, and they tend to quickly be out of date too, 'cause all well-known databases have a lot of active development going on.
postgresql outperforms mongodb in new round of tests
did i miss something? mongodb was never ever faster than postgres. that's nothing new. most of these things are clear when one reads the mongodb docs: mongodb stores metadata (nearly) uncompressed on a per-document basis, so of course it uses way more disk space. it doesn't store the data in any efficient way either. also, it's pretty much unoptimized compared to postgres, which has been around for a really long time, so it's kinda slow. many functions in mongodb are actually implemented in javascript, not c, so that's also a factor, even when i guess it's not the big one here. mongodb has a lot of limitations that can really bite you (document size, even though that's the smallest one (gridfs), how you can do indices, even limitations in what your query can look like, etc.). the only thing that's good about mongodb is that it's nice for getting something up and running quickly and that it's a charm to scale (in many different ways), compared to postgresql. if postgresql had something built in(!) coming at least close to that (and development has a strong focus there) it would be perfect. for all these reasons many companies actually have hybrid systems, because sometimes one thing makes sense and sometimes the other. the benchmark seems strange, 'cause there are many sql and nosql databases that are faster and that's a kinda well-known fact. i think everyone who ever had to decide on a database system has known that, even without a benchmark. this makes it kinda look like an advertisement (look at the company behind the blog). i've been using postgresql 9.3 with json for a while now and it's great. also, i know it is possible to scale postgresql and it's really nice. still, a lot more complexity involved (again, depending on the use case). just use the right tool and please let's stop with such shallow comparisons, because i think it kinda harms the reputation of database engineers and system architects - and the authors of such comparisons. when you look for real comparisons and example use cases, typical patterns, or just some help, one always stumbles across these things, and they tend to quickly be out of date too, 'cause all well-known databases have a lot of active development going on.
this benchmark misses the entire point of mongodb: that you can atomically update individual fields in the document. this has not been possible with the postgres json storage type; instead, the entire json blob must be read out, modified, and inserted back in. this reality is well known to those that understand postgres, which is why they have hstore. hstore is limited though (particularly in the size of the store), so there is work underway to make it more competitive with mongodb. so now they are also releasing a jsonb (b for binary) storage format, which looks promising, but i can't find any information on exactly what its features are. i would love to actually see a benchmark comparing field updates, but this benchmark is not it. mongodb is a database with trade-offs, downsides, and more crappy edge cases than mysql, but it does exist because at its core it allows data modeling that traditional sql databases are lacking. mongodb has first-class arrays rather than forcing you to do joins. it supports schema-less data, which is rarely needed, but when you need it, it can be very useful. it can do inserts and count increments very quickly (yes, the write lock means you eventually have to put collections in separate databases), which is also useful for certain use cases.
postgresql outperforms mongodb in new round of tests
this benchmark misses the entire point of mongodb: that you can atomically update individual fields in the document. this has not been possible with the postgres json storage type; instead, the entire json blob must be read out, modified, and inserted back in. this reality is well known to those that understand postgres, which is why they have hstore. hstore is limited though (particularly in the size of the store), so there is work underway to make it more competitive with mongodb. so now they are also releasing a jsonb (b for binary) storage format, which looks promising, but i can't find any information on exactly what its features are. i would love to actually see a benchmark comparing field updates, but this benchmark is not it. mongodb is a database with trade-offs, downsides, and more crappy edge cases than mysql, but it does exist because at its core it allows data modeling that traditional sql databases are lacking. mongodb has first-class arrays rather than forcing you to do joins. it supports schema-less data, which is rarely needed, but when you need it, it can be very useful. it can do inserts and count increments very quickly (yes, the write lock means you eventually have to put collections in separate databases), which is also useful for certain use cases.
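(for contrast, a minimal sketch of the atomic per-field update being described, using pymongo; the docs collection and field names are made up, and update_one is the pymongo 3.x spelling - older drivers spelled it update:)

    # atomic in-place field update in mongodb: only the named fields are
    # touched on the server, no client-side read-modify-write round trip.
    # (hypothetical "docs" collection; assumes pymongo and a local mongod.)
    from pymongo import MongoClient

    client = MongoClient()            # defaults to mongodb://localhost:27017
    docs = client["example"]["docs"]  # hypothetical database/collection

    docs.update_one(
        {"_id": 1},
        {"$set": {"propertya": 42},    # set one field in place
         "$inc": {"view_count": 1}},   # atomically bump a counter
    )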
this is very interesting. around when mongodb was quickly becoming the cool thing to do, i'd ask people why it was better than just storing things in postgres. people would have answers that would be grammatically correct but would not make any sense. that being said, i find it weird that now it is cool to make fun of mongodb. some people on this thread have even said they want to know if a service is using mongodb and they'd not use that service. i am pretty sure they'd be all over stripe (who store your money-related stuff in mongodb) in a different thread.
who's afraid of bromine?
x0x0 and htsthbjig are right, this is an industry puff piece. in a previous job i researched brominated flame retardants and it is well documented that they cause developmental disruptions, cancer, and reproductive toxicity. one of the main dangers is that they are not chemically bonded to the plastic and foam that they are added to, like in your couch cushions. this means they get into dust, which is particularly a hazard for babies crawling on the floor. the laws are also outdated and not very practical - in california, which has some of the most rigorous regulations on environmental health, the test for a flame retardant cushion is to hold a candle to a piece of foam for 12 seconds. the chicago tribune did a series on the tobacco industry's role in getting brominated flame retardants into furniture, which i recommend reading for some context on the current us regulations: <link> to add links to some studies: <link> <link> <link>
this article is an industry puff piece. the author writes: &quot;this highlights an unavoidable problem for the chemicals industry - much of what they do is still a learning process, and it often takes many years for the long-term risks inherent in a particular product to emerge. yet it is also important to get these risks in perspective. so far, there are no known cases of brominated fire retardants actually causing anyone major health problems - they are being banned because of the potential hazard they pose.&quot; well, yes -- you have to basically be an idiot to take a bioaccumulative lipophilic chemical that, after all, is unnecessary (we can still build flame-retardant materials without them, and somehow pepsi still makes pepsi) and use them without incredibly thorough health testing. we know some of these flame retardants disrupt thyroid hormones and may impair neurodevelopment. the risks outweigh the benefits, and i decline to be in the experiment pool.
who's afraid of bromine?
this article is an industry puff piece. the author writes: &quot;this highlights an unavoidable problem for the chemicals industry - much of what they do is still a learning process, and it often takes many years for the long-term risks inherent in a particular product to emerge. yet it is also important to get these risks in perspective. so far, there are no known cases of brominated fire retardants actually causing anyone major health problems - they are being banned because of the potential hazard they pose.&quot; well, yes -- you have to basically be an idiot to take a bioaccumulative lipophilic chemical that, after all, is unnecessary (we can still build flame-retardant materials without them, and somehow pepsi still makes pepsi) and use them without incredibly thorough health testing. we know some of these flame retardants disrupt thyroid hormones and may impair neurodevelopment. the risks outweigh the benefits, and i decline to be in the experiment pool.
&gt; a fire is a self-perpetuating chemical reaction in which the high temperature encourages fuel to combine with oxygen in the air, further raising the temperature in the process. kinda ot, but this sentence finally cleared up for me how burning works.
who's afraid of bromine?
&gt; a fire is a self-perpetuating chemical reaction in which the high temperature encourages fuel to combine with oxygen in the air, further raising the temperature in the process. kinda ot, but this sentence finally cleared up for me how burning works.
bromine, bromide and brominated organic compounds are all completely different chemicals. like chlorine gas (cl2) and the chloride ion in table salt (nacl), they behave completely differently. even with the brominated organic chemicals, each one can behave differently. very subtle differences in organic molecules can lead to drastically different behaviour. lumping all those compounds together doesn't make much sense; they have to be evaluated individually (or in closely-related groups).
who's afraid of bromine?
bromine, bromide and brominated organic compounds are all completely different chemicals. like chlorine gas (cl2) and the chloride ion in table salt (nacl), they behave completely differently. even with the brominated organic chemicals, each one can behave differently. very subtle differences in organic molecules can lead to drastically different behaviour. lumping all those compounds together doesn't make much sense; they have to be evaluated individually (or in closely-related groups).
i once had to travel with a vial of bromine. besides sealing it up pretty tight. i had a plan if i was caught getting on the plane - tell them it was cow's blood, which shouldn't be very terrorizing. if i was caught getting off the plane i'd tell them it was bromine because cow's blood would be disallowed by customs.
curses, fooled again
i feel cheated that this isn't about the terminal control library.
skip the article. three points seem most important: &gt; &quot;i am allen funt's son!&quot; &gt; a candid camera remake is coming soon. watch it! &gt; the nyt can't find enough news, so it fills space with entertainment
curses, fooled again
skip the article. three points seem most important: &gt; &quot;i am allen funt's son!&quot; &gt; a candid camera remake is coming soon. watch it! &gt; the nyt can't find enough news, so it fills space with entertainment
i feel like this advertorial for candid camera, appearing in the new york times, is a meta-prank on the reader. seems like the show relies on the kindness of strangers, and that still exists, even in a smartphone world.
curses, fooled again
i feel like this advertorial for candid camera, appearing in the new york times, is a meta-prank on the reader. seems like the show relies on the kindness of strangers, and that still exists, even in a smartphone world.
i don't get it. these &quot;pranks&quot; are all just slightly silly, but not completely absurd. for those behind a paywall, this is about candid camera, and some newer material they're trying, among which: - postman telling people their mail will be delivered by drone (with a drone coming by dropping it off) - people getting told that they'll be charged $10 &quot;in store fee&quot; for not buying online - people being asked for 3 forms of photo id to pay by credit card - hired a cop to enforce a “2 m.p.h. pedestrian speed limit.” the political ones are a bit sillier: &gt; we showed new yorkers petitions to recall state officials, but the names were all fictitious. most people supported the effort, among them a lawyer who carefully explained that one should never sign anything without complete knowledge of the facts, and then signed anyway. &gt; our actress posing as a candidate obtained dozens of campaign signatures without ever stating a position, a party or even her last name. the last one shows an example of &quot;real life&quot; bikeshedding though: &gt; i told residents in queens, n.y., that they would now be required to separate household trash into eight different color-coded bins. i can’t imagine someone being more passionate about any world controversy than the gentleman who was incensed about a bin devoted to “poultry waste.” “how,” he asked, “am i going to eat enough chicken in two weeks to fill that up?”
curses, fooled again
i don't get it. these &quot;pranks&quot; are all just slightly silly, but not completely absurd. for those behind a paywall, this is about candid camera, and some newer material they're trying, among which: - postman telling people their mail will be delivered by drone (with a drone coming by dropping it off) - people getting told that they'll be charged $10 &quot;in store fee&quot; for not buying online - people being asked for 3 forms of photo id to pay by credit card - hired a cop to enforce a “2 m.p.h. pedestrian speed limit.” the political ones are a bit sillier: &gt; we showed new yorkers petitions to recall state officials, but the names were all fictitious. most people supported the effort, among them a lawyer who carefully explained that one should never sign anything without complete knowledge of the facts, and then signed anyway. &gt; our actress posing as a candidate obtained dozens of campaign signatures without ever stating a position, a party or even her last name. the last one shows an example of &quot;real life&quot; bikeshedding though: &gt; i told residents in queens, n.y., that they would now be required to separate household trash into eight different color-coded bins. i can’t imagine someone being more passionate about any world controversy than the gentleman who was incensed about a bin devoted to “poultry waste.” “how,” he asked, “am i going to eat enough chicken in two weeks to fill that up?”
those weren't very good pranks, imo. they're all very similar to things that already exist. separating garbage into 8 categories? we already do 3 in real life. it's a matter of time before it's more. a broken yogurt machine? it happens. the political ones aren't even surprising. they could have put real names of city council members on the list, and i'm sure 90% or more of the participants wouldn't know if they were real or not.
americans throw out more food than plastic, paper, metal, and glass
even if you ignore the symbolism of throwing away food while people still die of hunger, i am surprised by the &quot;so what?&quot; reaction of this crowd. isn't yc about improving efficiencies? yes, there is waste elsewhere - the average server utilization in a data center is 12%, but that's one reason the cloud came along, with 60% utilization (see <link>), and new technologies like docker will likely make that even higher. so instead of saying &quot;so what?&quot; shouldn't we say: &quot;we can do better and this is a business opportunity&quot;?
it's economics. if i buy half a gallon of half and half, i know full well i don't need half a gallon for my coffee before it spoils. i estimate that between 40% and 0% of it will be wasted on average, depending on how many weeks it lasts in the fridge. but if i compare it to buying a quart, a quart will cost me around $2.49 while half a gallon would cost me $3.79. this means that for a cost of $3.79 i get to use 0.4 gallons on average, while for $4.98 i get to use 0.5 gallons (assuming a quart never expires, which isn't true - i've had a quart go bad after a little over a week). this means that for 0.4 gallons, buying quart by quart, i have to pay $3.98, but buying half a gallon i pay less. so my strategy is then to buy quarts when they're on sale (when i can get them for $1.99) and half a gallon when i have no discount.
americans throw out more food than plastic, paper, metal, and glass
it's economics. if i buy half a gallon of half and half, i know full well i don't need half a gallon for my coffee before it spoils. i estimate that between 40% and 0% of it will be wasted on average, depending on how many weeks it lasts in the fridge. but if i compare it to buying a quart, a quart will cost me around $2.49 while half a gallon would cost me $3.79. this means that for a cost of $3.79 i get to use 0.4 gallons on average, while for $4.98 i get to use 0.5 gallons (assuming a quart never expires, which isn't true - i've had a quart go bad after a little over a week). this means that for 0.4 gallons, buying quart by quart, i have to pay $3.98, but buying half a gallon i pay less. so my strategy is then to buy quarts when they're on sale (when i can get them for $1.99) and half a gallon when i have no discount.
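(making that comparison explicit as cost per gallon actually used - a sketch using the prices quoted above and the 0.4 gal expected-use figure for a half gallon:)

    \text{half gallon: } \frac{\$3.79}{0.4\ \text{gal}} \approx \$9.48/\text{gal used}, \qquad
    \text{quart at } \$2.49: \frac{\$2.49}{0.25\ \text{gal}} = \$9.96/\text{gal used}, \qquad
    \text{quart on sale at } \$1.99: \frac{\$1.99}{0.25\ \text{gal}} = \$7.96/\text{gal used}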
tentatively - this might not really be much of a problem? in many situations, we deliberately make sure we have excess capacity because it's better to have too much than too little. for example, i have a server running idle most of the time, but it's available to pick up a large long-running task when a user requests it. analogously, when catering for an event, people often aim to have more than enough food for almost all scenarios, so they'll throw away food almost all the time. running a restaurant is catering an event every day. when i was a teenager, i worked at subway and we'd try to have more than enough bread (we only ran out once when we didn't anticipate a large sports-win-based city-wide party), so we'd throw away excess bread at the end of each day. of course, people are starving and that's terrible. but reducing food waste in developed countries doesn't seem like a powerful lever to reduce it. non-wasted food isn't going to be transported to developing nations. reducing food demand would likely decrease food prices, but i wouldn't expect a large decrease. if my understanding is correct, the food supply is quite elastic, meaning we can easily produce more if there's people willing and able to pay for it. i think that the two main problems behind starvation are the lack of purchasing power of the poverty-stricken, and broken political systems. solving those would eliminate starvation, even if the developed nations waste as much food as they like. (i'm no expert on this and welcome any corrections or opposing views.)
americans throw out more food than plastic, paper, metal, and glass
tentatively - this might not really be much of a problem? in many situations, we deliberately make sure we have excess capacity because it's better to have too much than too little. for example, i have a server running idle most of the time, but it's available to pick up a large long-running task when a user requests it. analogously, when catering for an event, people often aim to have more than enough food for almost all scenarios, so they'll throw away food almost all the time. running a restaurant is catering an event every day. when i was a teenager, i worked at subway and we'd try to have more than enough bread (we only ran out once when we didn't anticipate a large sports-win-based city-wide party), so we'd throw away excess bread at the end of each day. of course, people are starving and that's terrible. but reducing food waste in developed countries doesn't seem like a powerful lever to reduce it. non-wasted food isn't going to be transported to developing nations. reducing food demand would likely decrease food prices, but i wouldn't expect a large decrease. if my understanding is correct, the food supply is quite elastic, meaning we can easily produce more if there's people willing and able to pay for it. i think that the two main problems behind starvation are the lack of purchasing power of the poverty-stricken, and broken political systems. solving those would eliminate starvation, even if the developed nations waste as much food as they like. (i'm no expert on this and welcome any corrections or opposing views.)
food production has become cheap and efficient. so efficient, in fact, that it apparently has no significant economic impact if &quot;as much as 40 percent of america's food supply ends up in a dumpster&quot;. otherwise, the market would improve efficiency with regard to wasting all that food. i find it particularly appalling that the production of animal products has become so economical that it does not matter that so much is thrown away. efficient meat production means, of course, factory farming and questionable treatment of animals. this is a problem. the other problem is with the consumers of such cheap animal products. they are not aware anymore, have no respect for the fact that the thing they just gobbled up (and threw away) was once a living, breathing being. it is sad.
americans throw out more food than plastic, paper, metal, and glass
food production has become cheap and efficient. so efficient, in fact, that it apparently has no significant economic impact if &quot;as much as 40 percent of america's food supply ends up in a dumpster&quot;. otherwise, the market would improve efficiency with regard to wasting all that food. i find it particularly appalling that the production of animal products has become so economical that it does not matter that so much is thrown away. efficient meat production means, of course, factory farming and questionable treatment of animals. this is a problem. the other problem is with the consumers of such cheap animal products. they are not aware anymore, have no respect for the fact that the thing they just gobbled up (and threw away) was once a living, breathing being. it is sad.
but there's another less apparent problem with food waste: the threat to the environment. landfills full of decomposing food release methane, which is said to be at least 20 times more lethal a greenhouse gas than carbon dioxide. surely this just releases carbon that was fixed during the development of the plant/animal that is now decomposing? i'm a little unclear on why this is worse than natural decomposition. i would like to know why the amount of food wasted took off in 1980 and nearly quadrupled over the following 3 decades. i'm guessing changes in usda standards about acceptable quality of food for sale, but i was hoping the author would have researched this.