A new breed of dating website has entered the business with half a million dollars of financial backing and a radical approach that will leave you either intrigued or more than a little weirded out. According to TechCrunch, the site's Facebook application will allow users to find you based on your Facebook interests, Spotify history, Netflix activity and even Amazon data, even if you haven't installed the application yourself.
I’ll be interested to see how people feel about this. I’ve got a feeling people will find it too strange to be contacted out of the blue on Facebook by strangers, but the option to ask mutual friends for an introduction sounds like it could be more socially viable. I imagine the sharing of our data without our consent will leave a lot of people very upset, but that may not dent the success of the new venture: I’m sure the prospect of a date invitation from an attractive stranger will be too much of a lure for most to turn down on principle. It certainly seems that a growing majority of people no longer have qualms about sharing their personal information on Facebook. When it comes to the use of our data, the application will have access to nothing that isn’t public anyway, and that says a lot about the current state of our privacy.
As the graphic above demonstrates, this certainly isn’t the first time a dating website has matched users based on their interests, but it is the first time it is being done on Facebook without the date’s consent. Perhaps soon we’ll be discussing the Facebook filter bubble in the same terms as the Google filter bubble. On the other hand, you now have another reason to “like” Neurobonkers on Facebook: what single person wouldn’t want to meet a fellow sceptical, science-loving, critically minded partner?
Seeing around corners has always been a nut that has just been too hard to crack, until now…
An MIT research team have developed a camera that can do just that, using a new form of photography called femto-photography, which exploits the finite speed of light by analysing ‘echoes of light’.
The project has a long way to go but promises potential applications in fields ranging from medical imaging to transport. Check out the team’s video explaining how the camera works:
‘A laser pulse that lasts less than one trillionth of a second is used as a flash and the light returning from the scene is collected by a camera at the equivalent of close to 1 trillion frames per second.’
From the research team’s website
Velten, A., Willwacher, T., Gupta, O., Veeraraghavan, A., Bawendi, M., & Raskar, R. (2012). Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging. Nature Communications, 3. DOI: 10.1038/ncomms1747 (PDF)
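The arithmetic behind that trillion-frames-per-second figure is simple: consecutive frames separate light paths that differ by a fraction of a millimetre. Here is a toy sketch of the basic time-of-flight calculation (an illustration of the principle only, not the team’s actual reconstruction algorithm):

```python
# Toy time-of-flight calculation: how far did light travel before echoing back?
# Illustrates the principle only, not the MIT group's reconstruction method.

C = 299_792_458  # speed of light in m/s

def echo_distance(round_trip_seconds):
    """One-bounce path length implied by a light echo's round-trip time."""
    return C * round_trip_seconds / 2

# A camera resolving ~1 trillion frames per second distinguishes echoes
# roughly one picosecond apart, i.e. path differences of about 0.15 mm:
resolution_m = echo_distance(1e-12)
print(f"{resolution_m * 1000:.3f} mm")  # prints "0.150 mm"
```

It is that sub-millimetre depth resolution that lets the system infer the shape of objects it cannot see directly from the timing of their light echoes.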
The Pirate Bay have just announced their next plan to avoid shutdown: GPS-controlled drones that will fly “some kilometers up in the air” carrying their anonymising forwarding servers, which, now standing at a grand total of 90MB, will fit safely onto a Raspberry Pi. No, it’s not April Fools’ Day just yet; this from the TPB website:
“This way our machines will have to be shut down with aeroplanes in order to shut down the system.”
“With modern radio transmitters we can get over 100Mbps per node up to 50km away. For the proxy system we’re building, that’s more than enough.”
This comes only a month after The Pirate Bay announced their entrance into the world of 3D printing piracy. It seems that, despite the will of the courts, The Pirate Bay will not be going down without a fight.
To learn more about the latest developments in the next generation of quadcopters, check out the TED talk by the head of the leading research group:
These are live word clouds created using the breaking news and science news feeds of providers including the BBC, the Guardian, Reuters, Al Jazeera, Nature, CNN, CBC, New Scientist, The Economist, the Wall Street Journal, the Financial Times, the Associated Press and the New York Times, to name a few. The stream is powered by data aggregator Infomous, who have also built a tool for searching Twitter which is, in some respects, far more useful than Twitter’s own search function! If you’re like me, you typically only see your home page for a split second before you begin typing in the address or search bar, so a news cloud makes for the perfect home page.
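At its core, a news cloud is just word frequency over recent headlines, sized by count. A toy sketch (the headline list here is invented for illustration; a real cloud would pull live RSS feeds):

```python
from collections import Counter
import re

# Hypothetical headlines standing in for a live RSS pull.
headlines = [
    "MIT camera sees around corners",
    "Pirate Bay plans server drones",
    "Berkeley team reconstructs video from brain scans",
    "Brain scans decode video clips",
]

# Common words that would otherwise dominate the cloud.
STOPWORDS = {"the", "a", "from", "around", "sees", "plans"}

words = Counter(
    w for h in headlines
    for w in re.findall(r"[a-z]+", h.lower())
    if w not in STOPWORDS
)

# The most frequent words get the biggest type in the cloud.
for word, count in words.most_common(5):
    print(word, count)
```

Services like Infomous add the clever parts on top: clustering related words, linking each word back to its source stories, and updating the whole thing live.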
A group at Berkeley has just published (£) the first successful attempt to reconstruct colour video imagery from the mind using an fMRI brain scanner. The results are startling, encouraging and a little bit scary.
The method used, called fMRI, is known for its high spatial resolution (3D imaging ability) but notoriously low temporal resolution (measurements with respect to time – effectively a slow shutter speed). In the past, this has been a barrier to research on the visual cortex because of the incredibly high rate of information processing in the visual system. However, scientists have recently developed a new MRI encoding method that allows for the modelling of brain activity in the visual system at a faster rate. In the experiment conducted by the Berkeley group, participants were shown 7,200 seconds of random colour video clips one time only, while their brains were scanned using the novel fMRI sequence. From these scans, researchers were able to create a “dictionary” of brain activity in the visual system.
After a dictionary of brain activity in the visual system was created, the participants watched a fresh unseen video from YouTube while undergoing a brain scan. This resulted in video outputs that resembled the new YouTube video shown to participants. The output appears as a collage of flickering pixels that reminds me of a cross between the paintings of prosopagnosia sufferer Chuck Close and the imagery in A Scanner Darkly.
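The two-phase logic can be caricatured in a few lines: fit a model predicting brain responses from stimuli, then decode a new response by finding the dictionary entry whose predicted response matches it best. This is a deliberately tiny sketch using plain linear regression; all the dimensions and names are invented, and the real study’s models are vastly more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy dimensions: 50 "voxels", 10 stimulus features, 200 training clips.
n_voxels, n_features, n_train = 50, 10, 200

# Training phase: known stimuli and the (noisy) brain responses they evoked.
train_stimuli = rng.normal(size=(n_train, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))  # unknown in reality
train_responses = (train_stimuli @ true_weights
                   + 0.1 * rng.normal(size=(n_train, n_voxels)))

# Fit an encoding model by least squares: predict voxel responses from stimuli.
weights, *_ = np.linalg.lstsq(train_stimuli, train_responses, rcond=None)

# Decoding phase: given a NEW response, find the dictionary clip whose
# predicted response matches best (the "dictionary" idea, caricatured).
dictionary = rng.normal(size=(500, n_features))  # candidate clips
predicted = dictionary @ weights                 # predicted response per clip

new_stimulus = dictionary[123]                   # pretend this clip was shown
new_response = new_stimulus @ true_weights
best = int(np.argmin(np.linalg.norm(predicted - new_response, axis=1)))
print(best)  # should recover clip 123
```

The Berkeley group’s output videos are essentially a weighted blend of the best-matching dictionary clips, which is why the reconstructions look like flickering collages rather than crisp playback.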
The correlation between the videos shown to the participants and the output imagery in the collage-like videos (below) is pretty astounding, especially considering that there is zero overlap between the clips used for calibration and the clips used to test the system.
The study authors suggest that the method used in this paper could eventually lead to the generation of video output from participants experiencing dreams or hallucinations. Watch this space! What was once a field reserved firmly for science fiction may fast be becoming a reality.
Listen to an NPR interview with the researchers here:
Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology. PMID: 21945275