Images dating back to 2007 have surfaced showing the production lines for Apple’s first iPhone, offering a hint of what it was like to assemble the first-ever version of the groundbreaking smartphone.
Released in 2007, the original iPhone was a major technological advancement, both for Apple itself and for the industry as a whole. The hardware has also forced assembly partners to improve their practices over time, including expanding their facilities and workforce, and refining how they actually assemble the products in time for release.
Photographs released by Bob Burrough on December 24 and first reported by iPhone in Canada show some of the work that went into assembling that first iPhone, with images from spring 2007 showing the inside of “the iPhone factory.” The four images, posted to Twitter, depict a few of the latter stages of assembly, and appear to have been taken within a Foxconn facility.
Rather than actual assembly, the photographs seem to show quality assurance and testing taking place, complete with shelves of iPhones connected with wires in a mass testing rack. One shot shows test software running on units, while another has employees connecting individual iPhones up to testing equipment for final checks.
Given its original introduction by Apple co-founder Steve Jobs in January 2007 and its release months later in June, the declared spring timing for the photographs suggests that Foxconn was either close to starting full production, or had already reached it. At the time, Apple wouldn’t have had the same level of supply chain control as it exerts now, so assembly and shipments were likely to have taken far longer.
Bob Burrough is a former Apple engineer who claimed in 2017 that Apple had changed to become a more hierarchical company. In 2007, when the photographs were taken, Apple was organizationally “the wild west,” said Burrough in an interview, with employees working outside their main roles because projects took precedence over the corporate structure.
According to Burrough, current CEO Tim Cook has attempted to eliminate executive conflict within Apple and improve the structure of middle management, a plan that was thought to have crippled the old spirit of the company that thrived under Jobs.
Since Apple announced the transition to its own ARM-based SoC technology, many have wondered what the impact will be on the future of the business. The truth is that this change will be huge, and it may show how other companies can fight back against Intel and its dominance of the processor market.
Apple’s M1 Processor Isn’t Just an Indictment of Intel, It’s a Direct Shot at Microsoft
The latest Macs are faster and more efficient, and highlight the difference between Apple and everyone else
Apple made no secret of the fact that it had grown impatient with Intel’s inability to stick to its own roadmap. Specifically, Intel has repeatedly fallen behind its own schedule for moving to smaller transistors, most recently with its 7-nanometer process.
Apple, on the other hand, has been making chips for a decade, the latest of which, the A14, is the fastest smartphone processor ever. It’s also the first 5-nanometer chip used in any mainstream device, and it currently powers the iPhone 12 and the fourth-generation iPad Air.
Last week, Apple introduced the M1, which is based on the same processor architecture, though with more CPU and GPU cores. More importantly, it’s made with the same 5-nanometer process. That’s important because that smaller process allows chips to be more efficient, meaning they get more performance for the same amount of energy.
Apple was clearly ready to be done with Intel after years of being unable to build the Mac it really wanted, due to the lagging production schedules of the Intel Core processors it was using. To that end, Apple made its point. The M1 isn’t just a little bit faster or more efficient, it’s significantly so. So much, in fact, that it’s a little embarrassing.
But the real shot across the bow here is directed at Microsoft, which has tried to build its own ARM chip, a variant of Qualcomm’s Snapdragon. I’ve used several ARM-based PCs, including the Surface Pro X, the Samsung Galaxy Book S, and the Lenovo Flex 5G. Each has interesting reasons for existing, but none of them is ready for, well, prime time.
The transition to Apple Silicon has been a long time coming, and developers are finally able to start adapting their apps for ARM-based Macs. Between virtualization software and live translation of Intel-based apps, Apple has developers and consumers covered. The company plans to support its Intel Macs for the next few years, but it is clear that custom ARM silicon is the future of the Mac.
● All iPad and iPhone apps will run natively
● macOS Big Sur works on Intel and M1
● Thunderbolt is still supported
● Rosetta 2, Universal 2, and Virtualization
● MacBook Air, MacBook Pro, and Mac mini first with M1
● Entire transition will take 2 years
Apple Silicon doesn’t refer to a specific chipset or processor, but to the company’s custom silicon as a whole. Its development lets the company focus on performance and vertical integration across platforms rather than needing to optimize software to work with another company’s hardware.
The custom processors developed by Apple have benefitted the iPhone and iPad for years, and now they will benefit the Mac too.
Breaking down the power of Apple’s M1 chip
Following more than a decade of chip architecture experience gleaned from developing the A-series processors, Apple has prepared the way for Apple Silicon on Mac with macOS Big Sur, Mac Catalyst, and several other developer platforms.
Apple Silicon Ecosystem
Apple’s first custom processors were born out of necessity, because Intel did not want to design chips for the iPhone. Because of this, Apple was able to build its own processors for the iPhone and ensure complete vertical integration with the software.
The A-series chips went on to become the most powerful and efficient mobile chipsets available, and Qualcomm and even Intel could not keep up. Now the Mac has M-series chips pushing beyond what was possible when running Intel on the Mac.
M1: the Mac custom processor
Apple’s M1 chip powering the new Macs
While Apple has made very powerful chipsets for the iPhone and iPad, those will not be used in their new Macs. There will be a specific system-on-a-chip architecture used for Macs and MacBooks called the M1. The first Macs to use the new chipset will be the MacBook Air, 13-inch MacBook Pro, and Mac mini.
The M1 uses a 5nm architecture with 16 billion transistors, four high-performance cores, four high-efficiency cores, and eight GPU cores. In the MacBook Air, it runs faster than 98% of consumer notebooks on the market.
Apple boasts the M1 as the world’s fastest CPU in low-power silicon, the world’s best CPU performance per watt, the world’s fastest integrated graphics in a personal computer, and breakthrough machine learning thanks to the Neural Engine. This means the M1 offers up to 3.5x faster CPU, 6x faster GPU, and 15x faster ML performance than previous Intel-based Macs.
The GPU is capable of running nearly 25,000 threads simultaneously with 2.6 teraflops of throughput. Apple says this makes it the fastest integrated GPU in a consumer PC.
The webcam used on the new MacBooks remains 720p, but the M1’s ML and ISP improvements will improve the overall image.
https://www.youtube.com/embed/ArboImyz2og
Expected Release Dates
Well-known Apple analyst Ming-Chi Kuo has reported on multiple occasions that Macs with Apple Silicon will start shipping in late 2020. A report in July gave a little more detail.
First model could be a 13.3-inch MacBook Pro in 4Q20
Multiple MacBook models expected in 4Q20
MacBook Air in 4Q20 or 1Q21
14-inch MacBook Pro 2Q21 or 3Q21
16-inch MacBook Pro with new design 2Q21 or 3Q21
The report has proven true so far, as Apple announced the MacBook Air and 13-inch MacBook Pro during its “One More Thing” event in November 2020. The report did miss the Mac mini update, which was also released during the event.
iPhone and iPad
Apple has advanced GPU performance by 1000x on iPad Apple Silicon
During the 2020 WWDC, Apple boasted about successfully bringing 10 billion chips to devices through the years, and wants to bring that expertise to the Mac. Apple feels it can hit the sweet spot between power consumption and performance by offering chips that are very powerful while remaining very efficient.
Over the past decade of custom chip building, Apple has been able to increase CPU performance by 100x and GPU performance by 1000x.
Apple also designed new system architectures and technologies to specifically take advantage of their system-on-chip design, like the Neural Engine for machine learning or the Secure Enclave for encryption. Combine those technologies with the existing software implementations like Metal and Swift, and Apple can utilize their custom chipsets far better than with Intel.
The Apple Silicon Transition
The Developer Transition Kit is a Mac mini with an A12Z
Apple has provided a Developer Transition Kit that can be ordered by developers using the “Universal App Quick Start Program.” The DTK is a Mac mini running on an A12Z with 16GB of RAM and 512GB of storage, and must be rented for $500 and later returned to Apple.
With this kit, devs can get started making apps run natively on macOS and Apple Silicon. However, the hardware is not all Apple has included to help with the process.
During WWDC, developers could attend virtual sessions or discuss issues with engineers within the forums and the Apple Developer app. Apple also provided day-one documentation on developing and testing Universal apps.
Any app built for iOS or iPadOS will run natively on Apple Silicon Macs as well.
Universal 2, Rosetta 2, and Virtualization software will make the transition smooth
On macOS Big Sur, there are multiple technologies built just for the transition. Apple called out three specific ones: Universal 2, Rosetta 2, and Virtualization.
Universal 2 is a universal binary format that works on both Intel and Apple Silicon Macs. With the same binary, developers can ship apps that run natively on both platforms.
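Under the hood, a universal binary is a “fat” file that packages one compiled slice per architecture behind a small header. As a purely illustrative sketch (this is not Apple’s tooling), the toy Python below builds and parses a minimal fat header using the layout from Apple’s <mach-o/fat.h>; the offset, size, and alignment values are made-up example numbers:

```python
import struct

# Constants from Apple's <mach-o/fat.h> and <mach/machine.h>
FAT_MAGIC = 0xCAFEBABE          # big-endian magic for universal binaries
CPU_TYPE_X86_64 = 0x01000007    # Intel 64-bit
CPU_TYPE_ARM64 = 0x0100000C     # Apple Silicon

CPU_NAMES = {CPU_TYPE_X86_64: "x86_64", CPU_TYPE_ARM64: "arm64"}

def list_fat_architectures(blob: bytes):
    """Return the architecture names found in a universal (fat) binary header."""
    magic, nfat_arch = struct.unpack_from(">II", blob, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    archs = []
    for i in range(nfat_arch):
        # Each fat_arch entry: cputype, cpusubtype, offset, size, align
        # (all big-endian 32-bit integers, 20 bytes per entry).
        cputype, _, _, _, _ = struct.unpack_from(">5I", blob, 8 + i * 20)
        archs.append(CPU_NAMES.get(cputype, hex(cputype)))
    return archs

# Build a minimal two-slice header, just to demonstrate the parsing.
header = struct.pack(">II", FAT_MAGIC, 2)
header += struct.pack(">5I", CPU_TYPE_X86_64, 3, 0x4000, 0x100, 14)
header += struct.pack(">5I", CPU_TYPE_ARM64, 0, 0x8000, 0x100, 14)

print(list_fat_architectures(header))  # ['x86_64', 'arm64']
```

On a real Mac, running `lipo -archs` against a Universal 2 app binary reports this same kind of architecture list.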
Third-party developers like Microsoft and Adobe have already begun building apps to work on the new chipset. The WWDC demo showed the new apps running easily even while editing 4K video live.
As Rosetta allowed PowerPC apps to run on Intel Macs, Rosetta 2 is fulfilling the same role to allow Intel apps to run on the new architecture.
Instead of the “just in time” (JIT) process that the original Rosetta used, Rosetta 2 does the heavy lifting at installation, translating the code up front. Code in third-party browsers executing JavaScript and similar technologies is still translated just in time at runtime.
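The difference between the two approaches can be sketched with a toy translator in Python. This is a conceptual illustration only, with invented “instructions”; it is not how Rosetta actually works internally:

```python
def translate(op: str) -> str:
    # Stand-in for the expensive work of translating one Intel
    # instruction sequence into an ARM equivalent.
    return "native_" + op

def run_jit(program):
    # Rosetta-1-style JIT: translate each instruction as it executes,
    # paying the translation cost every time, at run time.
    return [translate(op) for op in program]

def translate_aot(program):
    # Rosetta-2-style ahead-of-time pass: translate every distinct
    # instruction once, at "install time", and cache the result.
    return {op: translate(op) for op in set(program)}

def run_aot(program, cache):
    # At run time, execution is just a cache lookup.
    return [cache[op] for op in program]

program = ["add", "mul", "add", "sub"]
cache = translate_aot(program)          # heavy lifting done up front
assert run_jit(program) == run_aot(program, cache)
print(run_aot(program, cache))  # ['native_add', 'native_mul', 'native_add', 'native_sub']
```

Both paths produce the same translated instruction stream; the ahead-of-time version simply front-loads the cost, which is why Rosetta 2 apps feel fast once installed.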
As demonstrated at WWDC, Rosetta 2 is powerful enough to run some games built for Intel without major issues.
Virtualization software will also run on Apple Silicon Macs, but the extent of what will work, and how, is not fully known yet. Apple has demonstrated Linux running through virtualization apps like Parallels Desktop.
Users who need Windows on their Mac may be left out of the transition, as Apple made no mention of the platform, nor of Boot Camp, during the presentation.
Apple mentioned that other platforms like Docker will also work on Apple Silicon and devs will be able to take full advantage of the software.
An iPhone small enough for those with tiny hands…
Alongside the iPhone 12, Apple unveiled the 5.4-inch iPhone 12 mini at its October “Hi, Speed” event.
The iPhone 12 mini replaces the old iPhone 11, with a smaller display at 5.4 inches compared to the iPhone 11’s 6.1-inch display. It’s also larger than the current iPhone SE’s 4.7-inch screen, with a similar footprint to that device.
As with the rest of the iPhone 12 range, the iPhone 12 mini has a new design. It features flat sides instead of the previous curved ones, but continues to be made from glass and aluminum.
The iPhone 12 mini is effectively the same as the iPhone 12, bar the smaller screen.
Apple at its “Hi, Speed” keynote event on Tuesday unveiled a new lower-cost smart speaker dubbed the HomePod mini, which will retail for $99.
The device shares a design similar to the standard HomePod, but is spherical and less expensive. Its design also includes a backlit touch surface on the top with playback controls. That surface will also glow when Siri is invoked.
Although it’s smaller, the HomePod mini still places a similar emphasis on high-quality playback with new “computational audio” features combined with standard audio hardware. It doesn’t have quite the same bill of materials as the HomePod, but it still features a full-range dynamic driver and two passive radiators for advanced bass response. It also packs an acoustic waveguide to provide clear, 360-degree audio playback.
The onboard Apple S5 chip provides the computational audio features, which will include complex tuning models that allow the speaker to intelligently optimize both loudness and dynamic range.
The HomePod mini’s internal components. Credit: Apple
One of the most significant new additions to the HomePod mini is a new feature that will allow it to act as an Ultra Wideband base station to precisely locate U1-equipped devices, like the iPhone and Apple Watch Series 6.
Apple says that HomePod mini will be receiving a “magical” Handoff experience. A HomePod mini will understand when an iPhone is nearby, and will provide audio, visual, and haptic feedback so it feels like two Apple devices are actually physically connected.
But it’s the price that stands out as the most attractive feature of the HomePod mini, and it could allow Apple to better compete with rivals like Amazon and Google. Apple’s smart speaker competitors all offer entry-level devices at prices far below the HomePod’s $299 price tag. Additionally, Apple notes that the HomePod mini will be compatible with third-party streaming services like Amazon Music and Pandora.
The price point will also appeal to users who would like to add several pairs around their home for surround-sound audio. In fact, the HomePod mini will be able to detect other HomePod models nearby and intelligently form a stereo pair.
Rumors of a lower-cost HomePod have surfaced consistently since 2018. Reportedly, Apple was mulling a cheaper home audio device as a way to boost lackluster sales and market share. Most recently, a reliable leaker predicted that Apple would forego a successor to the HomePod and would instead just release the HomePod mini at its Oct. 13 “Hi, Speed” event.
The HomePod mini joins a wave of Apple devices that have gotten lower in cost, a list that also includes the iPhone SE and the $999 MacBook Air.
HomePod mini will retail for $99 and will become available for preorder on Nov. 6. The smart speaker will start shipping out to customers the week of Nov. 16.
Apple seems to believe your wallet will never be safe from shiny new electronics, even during the worst recession and crisis the world has experienced in decades. Now, in this complex environment, Apple launches a new iPhone model with 5G connectivity and an amazing OLED screen for the pleasure of your eyes.
As the good folks at The Verge said, it might be happening a bit later than usual, but Apple has just announced the iPhone 12. Featuring the same 6.1-inch display size as the iPhone 11 and iPhone XR before it, the latest “main” iPhone, and likely the model most people will gravitate toward, is making the transition from an LCD screen to OLED. And as rumored, the phone has flat sides for an overall look that more closely matches the iPad Pro and iPad Air (plus the iPhone 4 from years ago).
The iPhone 12 will come in black, white, blue, red, and green. Aside from the flat sides, it still largely resembles the iPhone 11. There’s still a notch at the top that houses Apple’s Face ID technology, though the bezels around the screen have been reduced — another perk of the OLED switch. Around back, the iPhone 12 has two cameras housed in a matte glass squircle, which makes for a nice contrast with the rest of the glossy back panel.
The iPhone 12 represents Apple’s first major foray into 5G cellular technologies.
prada has again collaborated with OMA, the architecture firm founded by rem koolhaas, in the design of its boutique store in tokyo. forming part of the miyashita park shopping mall in the city’s shibuya district, the 300 square meter single-storey outlet offers a selection of clothing, bags, accessories, and footwear for men and women in unisex and thematic versions.
opening onto the street, the external façade includes a large window that presents passersby with a view into the dreamlike, yet minimal interior. a black and white chequered floor extends throughout the space, while the walls are clad with backlit green sponge — a material designed by OMA in 2002 for the prada epicenter in los angeles. referred to as ‘prada sponge’, its development started with an architectural model made using a regular cleaning sponge.
‘as the visual effect of this backlit texture was very intriguing, an extensive search was initiated to recreate this material in 1:1 scale,’ OMA said at the time. ‘many hundred tests and prototypes were handmade in order to test hole sizes, percentages of openness, translucencies, depths, colors, etc.. simultaneously, mass production and 3D computer modeling techniques were investigated that could help translate the properties of the handcrafted prototypes and all technical requirements into the final product.’
elsewhere in the store, the luminous ceiling and video wall of the backdrop allow the displayed products to command full attention. found at the center of the space, aluminum display elements enhance the minimal aesthetic and contemporary feel of the interior.
available products include bags, backpacks, accessories, and shoes made of brand new re-nylon, a selection of visual books from prada, as well as exclusive cotton poplin T-shirts featuring original prints dedicated to the store’s opening. one such garment features the prada oval logo, which has been reinterpreted by OMA to include the prada miyashita park store name, and a travel tag print with TYO (tokyo) symbols. see the T-shirt designs below.
name: prada miyashita park
location: tokyo, japan
design: OMA
status: open
takayuki suzuki architecture atelier has designed the two-story wooden house as an open, lightweight construction that connects to its surroundings while maintaining enough privacy for its residents. the large, slanted roof that tops the entire residence brings in the trees of the garden while allowing for views of the park and the seascape in the distance. this way, ‘the residents can feel the presence of the city at every stage of their lives,’ as the studio explains.
the roof extends to the edge of the road, and by adjusting the height of the eaves it is possible to control the amount of visibility from the side of the road. the interior of the house oozes out beyond the boundary line of the road with the intention of connecting with the city. its wooden ceiling is draped with sheer fabric, which adds a lightweight, serene character to the building.
2019 was a complex year, even without a massive pandemic, back when we were able to go anywhere without fear of getting sick. Art reflects its time, so let’s enjoy these pieces of commercial art, courtesy of IndieWire, made in the time before the pandemic, when we were all happy; we just didn’t know it.
What makes a great movie poster? A stunning composition is certainly a factor, but more important is how effective the image is in generating anticipation for its respective movie. The best film posters provide a first feeling of what it will be like to experience the films themselves on the big screen. It doesn’t always work out. In some cases, the poster can tease an experience better than the film it’s representing (see “Lucy in the Sky” this year). Other times, a great film and a great poster synch up perfectly (see “Ad Astra” this year). One thing is for sure: Movie posters are their own art form, and in 2019 that art form was in damn fine shape.
Below are the 30 movie posters that made the strongest impressions in 2019.
“Portrait of a Lady on Fire”
Neon sent out this gorgeous “Portrait of a Lady on Fire” poster to press in the midst of the movie’s awards campaign. Those fiery orange and red brush strokes serve as a passionate reflection of the movie’s entangled desires.
James Gray’s science-fiction stunner “Ad Astra” issued one poster with Brad Pitt’s big, beautiful face front and center, but it turns out this poster that hid its leading man was far more effective in selling the mind-bending drama of the film’s plot.
“The Beach Bum”
Neon’s poster for Harmony Korine’s “The Beach Bum” perfectly embodies the infectious, neon-soaked energy at the film’s heart. If you’re going to go the conventional poster route by featuring all the characters, then you might as well make it as eye-popping as this one.
“Dark Phoenix” (China Release)
The final 20th Century Fox “X-Men” movie “Dark Phoenix” was one of the biggest box office bombs of the year (Disney CEO Bob Iger even blamed it for hurting the studio’s quarterly earnings), but it at least gave moviegoers this gorgeous one-sheet, courtesy of the film’s theatrical release in China.
“Glass” was supposed to be a celebratory career peak for M. Night Shyamalan as the sequel to “Unbreakable” and “Split,” but instead it ended up being the biggest disappointment of the director’s career. Final product aside, the film’s illustrated one-sheet remains one of the year’s best, perfectly capturing the comic-book heart of Shyamalan’s vision.
Adam Sandler is such a recognizable face in Hollywood that simply presenting a battered and bruised version of him surrounded by ominous darkness is enough to make the “Uncut Gems” poster a knockout.
Photo: Gunpowder & Sky
If you’re promoting a movie where Elisabeth Moss plays a destructive punk rocker, then you might as well put the actress and her sass front and center on the official poster.
Bow down to Florence Pugh, the scream queen of 2019. The actress’ terrified close-up is a warning that Ari Aster’s “Midsommar” is bound to screw with your head and shatter your emotions to pieces.
Would it be a “best of 2019” list for movies without mentioning “Parasite”? One of the first steps in turning Bong Joon Ho’s Palme d’Or winner into a box office sensation was creating a poster that could sell the dangerously entertaining energy of Bong’s vision. Mission accomplished.
“Once Upon a Time in Hollywood”
The first posters for Quentin Tarantino’s “Once Upon a Time in Hollywood” were so aggressively photoshopped that it’s hard to believe Sony was serious in putting them out to the public. Fortunately, the film’s marketing got back on track with a series of fictional movie posters for titles within “Hollywood” starring protagonist Rick Dalton (Leonardo DiCaprio).
Aisling Franciosi’s performance in Jennifer Kent’s bruising revenge drama “The Nightingale” is one of the year’s most gut-punching, and it’s no small feat that IFC Films was able to capture Franciosi’s wounded anger in one still image.
“One Child Nation”
Nanfu Wang and Jialing Zhang’s extraordinary documentary “One Child Nation” takes an unflinching look at China’s controversial one-child policy. The film’s poster visually represents that policy in chilling fashion. Any fan of HBO’s “The Leftovers” will surely appreciate this one.
Rick Alverson’s “The Mountain” stars Tye Sheridan as a young man who loses his mother and goes to a doctor who specializes in lobotomies and therapies (Jeff Goldblum). The movie’s poster evokes a sense of fading memories and identities that is key to the film’s storyline.
The more attention-grabbing “Joker” posters couldn’t compete with the effect of the movie’s one-sheet teaser. By showing just a sneak peek of Joaquin Phoenix’s full Joker face makeup, the poster effortlessly draws the viewer’s curiosity.
The poster for Christian Petzold’s remarkable “Transit” tells you everything you need to know about how a mysterious woman (Paula Beer) will bury herself into the soul of the film’s main character (Franz Rogowski).
Jordan Peele’s “Us” had one of the best posters of 2018 thanks to its inkblot teaser, and the film’s marketing strengths continued into this year with a striking official one-sheet featuring Lupita Nyong’o. The “Us” poster communicates the film’s plot more intriguingly than any trailer could.
“Wonder Woman 1984”
“Wonder Woman 1984” was supposed to debut in November 2019 before Warner Bros. delayed it until summer 2020, but at least Diana (Gal Gadot) got to make her mark this year with this eye-popping teaser poster. Perfectly capturing the energy of its time period while teasing major plot developments (that’s the Gold Armor!), this is how you tease a superhero tentpole.
“A Hidden Life”
Terrence Malick returned in 2019 with “A Hidden Life,” easily his best achievement since “The Tree of Life.” The film is an intimate character study about the resilience of the human spirit, a theme Fox Searchlight’s official poster manages to evoke.
Just before Netflix debuted Dan Gilroy’s “Velvet Buzzsaw” at the Sundance Film Festival, the streaming giant dropped a poster that proved how a simple approach can often be the most powerful. Does putting a title of a movie inside a frame qualify as art? The “Buzzsaw” poster asks the kind of questions the film is hellbent on exploring.
Photo: The Cinema Guild
The Cinema Guild distributed Hong Sang-soo’s “Grass” this year and issued a poster that wonderfully captured its unique beauty. The film stars Hong’s recent muse Kim Min-hee as a cafe worker who draws inspiration for her writing from customers she observes. The poster is a visual representation of the film’s plot and Hong’s witty, observational style.
“Queen and Slim”
Universal’s official poster for Melina Matsoukas’ “Queen and Slim” presents the eponymous characters played by Oscar nominee Daniel Kaluuya and breakout actress Jodie Turner-Smith in all their coolness and prominence. The film’s script from Lena Waithe turns Queen and Slim into icons, and that’s what this poster feels like it’s doing, too.
A24’s poster for “The Souvenir” takes the notion of writer-director Joanna Hogg reflecting on her coming-of-age experience quite literally. Capturing the reflections of stars Honor Swinton Byrne and Tom Burke, the poster speaks volumes to the relationship that plays out at the heart of the story.
Many critics were quick to compare Sundance comedy “Greener Grass” to the works of David Lynch and John Waters, so it’s only appropriate IFC embraced these influences on the official poster. That white picket fence recalls Lynch’s “Blue Velvet,” but it’s clear from this image that “Greener Grass” has a wacky originality all its own.
Benh Zeitlin will finally return next year with “Wendy,” arriving eight years after “Beasts of the Southern Wild.” The movie is debuting at the 2020 Sundance Film Festival. Fox Searchlight’s teaser poster for “Wendy” feels like a burnt photograph that has captured the forward momentum of youth in one still image.
“The Death of Dick Long”
A24 had major hits like “Midsommar” and “The Farewell” in 2019, but one of the distributor’s more overlooked titles was “The Death of Dick Long.” The poster for the movie is at once elegant and immature, a tonal mixture that is true to the spirit of filmmaker Daniel Scheinert (one half of the filmmaking team behind “Swiss Army Man”).
Even for those viewers unaware “Honey Boy” is an autobiographical drama about Shia LaBeouf, this clever poster with its old vaudeville aesthetic is a nifty tease about the dangers-of-showbiz storyline. Turning Lucas Hedges into a marionette doll is an easy metaphor, but it’s perfect in making clear what “Honey Boy” is selling.
“John Wick: Chapter 3 – Parabellum”
Three movies in and Lionsgate knows how to sell the “John Wick” franchise to fans with an action-packed punch. If this ingenious one-sheet for “John Wick: Chapter 3 – Parabellum” doesn’t get your blood pumping then nothing will.
“Lucy in the Sky”
Noah Hawley’s feature directorial debut “Lucy in the Sky” received some of the worst reviews of 2019, and its horrendous box office was nothing to brag about either. If only the movie lived up to its impressive first trailer and poster.
Oscilloscope’s poster for Justin Chon’s wonderful indie “Ms. Purple” does justice to the film’s story of a young woman who must pick up the pieces of her life and reconnect with her estranged brother during the final days of their father’s life.
“The Last Black Man in San Francisco”
The fifth A24 movie poster to make our list of the year’s best one-sheets is this illustrated beauty for Joe Talbot’s Sundance sensation “The Last Black Man in San Francisco.”
BONUS: “Ad Astra” (Dolby Cinema)
In case you needed more proof the “Ad Astra” marketing campaign was one of the year’s best.
Right now, for those of us who used to write software back in the shareware days, this battle seems odd and unfair. Apple looks like a giant pushing against smaller players to keep its dominance of the market, but it also gave us developers a way to make money in software without dying in the attempt, given the piracy that killed many of our projects. The truth is that this is really a fight between someone trying to make the most money from their devices to keep shareholders happy, and someone trying hard to make more money to keep their own shareholders happy; between someone who managed to create a profitable market for developers, and someone who got on the boat later and is now challenging the captain for a better place on it. Call me a fool, but I really believe the company that put a few dollars in the pockets of many of us, thanks to the App Store, deserves to keep its business. But hey, I am just a humble developer who was able to make a few apps profitable in a market that didn’t exist before.
I hope Epic understands that or, better yet, makes its own device. It could, but I hardly believe anyone would buy it. Now, from the mighty BBC, we have this amazing article:
Apple has fired back against claims by the maker of the Fortnite game that its control of the App Store gives it a monopoly.
In a response to the August lawsuit filed by Epic Games, Apple called those arguments “self-righteous” and “self-interested”.
It denied that its 30% commission was anti-competitive and said the fight was “a basic disagreement over money”.
Apple also said Epic Games had violated its contract and asked for damages.
The filing is the latest in a legal battle that started last month, after Fortnite offered a discount on its virtual currency for purchases made outside of the app, from which Apple receives a 30% cut.
In response, Apple blocked Epic’s ability to distribute updates or new apps through the App Store, and Epic sued, alleging that Apple’s App Store practices violate antitrust laws.
The court allowed Apple’s ban on updates to continue as the case plays out, but the existing version of Fortnite still works, as does Epic’s payment system.
Apple had said it would allow Fortnite back into the store if Epic removed the direct payment feature to comply with its developer agreement.
But Epic has refused, saying complying with Apple’s request would be “to collude with Apple to maintain their monopoly over in-app payments on iOS.”
In its filing, Apple said Epic has benefited from Apple’s promotion and developer tools, earning more than $600m (£462m) through the App Store.
Apple accused the firm, which it noted is backed by Chinese tech giant Tencent, of seeking a special deal before ultimately breaching its contract with the update.
“Although Epic portrays itself as a modern corporate Robin Hood, in reality it is a multi-billion dollar enterprise that simply wants to pay nothing for the tremendous value it derives from the App Store,” it said in the filing.
The legal battle between the two companies comes as Apple faces increased scrutiny of its practices running the App Store.
At a hearing in Washington over the summer, politicians also raised concerns that Apple’s control of the app store hurt competition.
It seems like a bad day to be a CCP spy overseas: the most powerful tool against American dominance in the app market is being banned in the most profitable market (the US, by the way). It is now supposedly the most powerful tool for China’s bid at world dominance. I mean, getting people to film themselves dancing and share genuinely useless things seems to be the way to dominate the world (really? I thought Keeping Up with the Kardashians was the way to achieve that goal). In the end, it seems China wants to keep its most successful app alive in the US market, and somehow it will not succeed this time. Well, they still make all the iPhones, but who cares about that; if there is a bunch of backdoors on those devices, we are really screwed.
Apple Insider, one of my favorite Mac-related media sites, published this article about it:
The Department of Justice in a court filing on Friday opposed TikTok’s requested injunction against an impending ban authorized by the Trump administration, saying a decision in TikTok’s favor would weaken the president’s power during a claimed national security emergency.
TikTok is facing a rapidly approaching deadline as Trump’s executive order calling for a ban of the popular social media app takes effect at 11:59 p.m. on Sunday. The company attempted to stop the measure by filing an emergency injunction with the U.S. District Court for the District of Columbia this week, and now the DOJ has responded.
The Justice Department argues that blocking the ban will “infringe on the President’s authority to block business-to-business economic transactions with a foreign entity in the midst of a declared national-security emergency,” reports The Verge. Today’s late-night filing was heavily redacted, but snippets show an aggressive defense of Trump’s order.
Referring to the supposed security threat that TikTok poses, the agency called Zhang Yiming, CEO of TikTok parent company ByteDance, a “mouthpiece” for the Chinese Communist Party (CCP). He is allegedly “committed to promoting the CCP’s agenda and messaging,” according to the DOJ.
The argument for banning TikTok in large part deals with user data, specifically where said information is stored. While a section detailing where the agency believes TikTok is holding U.S. data is redacted, a legible portion notes “US user data being stored outside of the United States presents significant risks in this case.”
TikTok is looking to avoid an all-out U.S. ban and is in the middle of hammering out a deal with Oracle. A tentative version of the agreement was approved “in concept” by Trump last week.
Terms of the arrangement call for Oracle and its partners to receive a 20% stake in a U.S. TikTok entity, with the remaining 80% held by ByteDance. Oracle will also be granted access to TikTok’s source code to ensure the software does not include backdoors. U.S. Secretary of State Mike Pompeo this week said the new business would be “controlled by Americans,” with ByteDance acting as a “passive shareholder.”
Following news of the deal, Secretary of Commerce Wilbur Ross delayed enactment of Trump’s executive order by one week to Sept. 27.
Lavender Bay Boatshed is a cool contemporary-style home designed by Stephen Collier Architects, located in Lavender Bay, a harbourside suburb on the lower North Shore of Sydney, New South Wales, Australia.
This awesome house is a nice example of how imagination can turn almost any structure into a cool home. Living right next to the water is a must for some people, and this is where creativity kicks in. The architects designed the interiors to fit every need of the residents, and the wooden roof structure gives the home another level of charm.
The existing floor plan offered an interesting shape whose curves challenged the architects, who turned the space into a set of creative modern stairs. There is also a priceless view of the Sydney Harbour Bridge.
The new Apple iPad Air seems to be the company’s direct answer to evolving tablet use, which is driving a big shift in the market. The Apple Pencil integration, an accessory that not so long ago was reserved for the iPad Pro, is very focused on this area, making the device more oriented toward pro users without actually leaving casual users behind. So if your idea of this device is using it only as a child-distracting device, a Netflix screen, or a pacifier, it will do all that too, on top of being an amazing productivity tool.
The new iPad Air 4 (2020) marks a big change for Apple’s ‘light-as-air’ line of tablets – no longer is it an ungainly version of the ‘standard’ iPads, but it’s now more like a specced-down iPad Pro.
This 2020 model in the line, the iPad Air 4, got shown off at Apple’s September event alongside the entry-level iPad (2020), the Apple Watch 6 and Apple Watch SE. It was certainly the most premium product shown off at the event, and maybe the most intriguing too.
The rapid growth of technology influences design trends every year. As designers, we need to be aware of existing and upcoming design trends, constantly learning, improving, and expanding our design toolkit in order to stay up to date with the current market. Based on my research, experience, and observations, I have very carefully selected 8 UI/UX design trends that you should watch in 2020. Let’s get started then! 🙂
#1 Animated Illustrations
Illustrations have been part of digital product design for a long time, and their evolution in recent years is very impressive. As very popular design elements, illustrations add a natural feel and a “human touch” to the overall UX of our products. They are also very strong attention grabbers. On top of that, by applying motion to these illustrations we can bring our products to life and make them stand out, adding extra detail and personality.
Another benefit of applying motion is capturing users’ attention and getting them to engage with your product. Animations are also one of the most effective ways to tell the story of your brand, product, or services.
#2 Microinteractions
Microinteractions exist in pretty much every app or website. You see them every time you open your favourite app; Facebook, for instance, has tons of different microinteractions, and I would say the “Like” feature is just the perfect example. Sometimes we are not even aware of their existence, because they are so obvious, natural, and “blended” into the user interface. Yet if you remove them from your product, you will notice very quickly that something really important is missing.
Generally speaking, in UI/UX design even a really small and subtle change can have a huge impact. Microinteractions are the perfect proof that attention to detail can greatly improve the overall user experience of your digital products and take them to the next level. Every year, every new device brings new opportunities for creating brand-new and innovative microinteractions, and 2020 will surely be no exception.
#3 3D Graphics in web and mobile interfaces
3D graphics exist pretty much everywhere: in movies, video games, and adverts on the streets. 3D graphics were introduced a few decades ago and have improved and evolved dramatically since then. Mobile and web technology is also growing rapidly, and new browser capabilities have opened the door for 3D graphics, allowing us as designers to create and implement amazing 3D visuals in modern web and mobile interfaces.
Creating 3D graphics and then integrating them into web and mobile interfaces requires some specific skills and tons of work, but the results are very often rewarding.
3D renders allow us to present a product or service in a much more interactive and engaging way: for instance, a 3D render can be viewed in a 360-degree presentation, improving the overall UX of the product.
In 2020, even more brands will use 3D render models to present their products or services in order to emulate the real-world (in-store) shopping experience.
#4 Virtual Reality
2019 was a big year for VR. In recent years we have seen a lot of progress and excitement around VR headsets, mostly in the gaming industry. We need to keep in mind that the gaming industry very often brings innovation and new technologies into digital product design, and VR is no exception: after the launch of the Oculus Quest in 2019, many opportunities opened up for other industries. Facebook CEO Mark Zuckerberg has already tested an exciting hand-interaction feature and officially announced a hand-tracking update for the Quest, coming in early 2020!
Sony and Microsoft will release their new generation of consoles in the 2020 holiday season, which should bring a lot of opportunities and room to grow for VR technology.
#5 Augmented Reality
In recent years we have seen a lot of progress, excitement, and improvement in AR. The world’s leading tech companies are investing millions into AR development, so we should expect this technology to expand and grow in 2020. Apple has even introduced its own AR toolkit, called ARKit 3, to help designers and developers build AR-based products.
There are endless opportunities to innovate and create brand-new, exciting experiences in the AR space. UI design for AR will be one of the major trends in 2020, so as designers we should be prepared and eager to learn the new tools and principles for creating AR experiences.
#6 Neumorphism
Generally speaking, skeuomorphic design refers to design elements created in a realistic style to match real-life objects. The growth of VR/AR technology and the latest trends shown on the most popular design platforms (Dribbble, Behance, etc.) might drive a skeuomorphic comeback in 2020, but this time in a more modern fashion and under a slightly modified name: “new skeuomorphism,” also called Neumorphism.
As you’ve probably noticed, Neumorphism is a very detailed and precise design style. Highlights, shadows, glows: the attention to detail is very impressive and definitely on point. Neumorphism has already inspired a lot of designers from all over the world, and there is a big chance it will be the biggest UI design trend in 2020.
#7 Asymmetrical Layouts
In recent years we have seen huge growth in asymmetrical layouts in digital product design. Traditional, template-based layouts are definitely going away, and 2020 will be no different as this trend continues. Proper use of asymmetrical layouts adds a lot of character, dynamism, and personality to our designs, so they no longer look template-based.
There is a lot of room for creativity, as the options and opportunities when creating asymmetrical layouts are endless. That said, creating successful asymmetrical layouts takes some practice and time (placing elements randomly on the grid won’t work 🙂), and they should be implemented with care, always keeping users’ needs in mind: we do not want them to get lost when using our digital products, do we? 🙂
#8 Storytelling
Stories play a very important role in the overall UX of digital product design. You often see them on landing pages as an introduction to a brand, product, or new service. Storytelling is all about conveying information to users in the most informative and creative way possible. This can be achieved with copywriting combined with a strong, balanced visual hierarchy (typography, illustrations, high-quality photos, bold colours, animations, and interactive elements).
Storytelling really helps create positive emotions and a relationship between your brand and your users. It can also make your brand much more memorable and make users feel like they are part of your product or service, so that they want to be associated with it. Having said that, storytelling is also a great and efficient marketing tool that can significantly increase sales of your products or services. As a highly successful tool, storytelling will continue to expand in 2020.
The Google Play Store is filling up with malicious applications again. Now those private photos and those oh-so-succulent nudes will end up in the hands of all those nice folks over at YouPorn and Pornhub.
Remember how Google had cleaned up the Play Store? Well, it seems it didn’t do much good.
Around that same time, Apple ran a commercial alluding to this Android problem, and unfortunately it may have had a point.
It turns out that after Google meticulously swept the store of apps that weren’t what they claimed to be and whose only goal was to steal data, one simple move was enough to bring them back.
What did these unpleasant developers do? Very simple: they renamed their applications and re-uploaded them. Practically all of them are back.
This data comes from Symantec, which adds that the malicious code is exactly the same as the code used before.
The malicious apps take the form of emoji keyboards, storage cleaners, calculators, call recorders, and more, although of course none of them does what it promises. They only infect the phone.
Most end users tap “accept” on everything an application asks for when it opens for the first time, sometimes granting access to data it should never have. Why would a calculator want full access to my data…
In the end, the advice is always the same: educate the users around you who don’t know as much as they think they do. Show them how to recognize a trustworthy app.
In What Is Life? (1944), one of the fundamental questions the physicist Erwin Schrödinger posed was whether there was some sort of “hereditary code-script” embedded in chromosomes. A decade later, Crick and Watson answered Schrödinger’s question in the affirmative. Genetic information was stored in the simple arrangement of nucleotides along long strings of DNA.
The question was what all those strings of DNA meant. As most schoolchildren now know, there was a code contained within: adjacent trios of nucleotides, so-called codons, are transcribed from DNA into transient sequences of RNA molecules, which are translated into the long chains of amino acids that we know as proteins. Cracking that code turned out to be a linchpin of virtually everything that followed in molecular biology. As it happens, the code for translating trios of nucleotides into amino acids (for example, the nucleotides AAG code for the amino acid lysine) turned out to be universal; cells in all organisms, large or small—bacteria, giant sequoias, dogs, and people—use the same code with minor variations. Will neuroscience ever discover something of similar beauty and power, a master code that allows us to interpret any pattern of neural activity at will?
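Computationally, the codon-to-amino-acid mapping described above (e.g. AAG coding for lysine) is just a fixed lookup table. A minimal sketch, using a deliberately partial table rather than the full 64-codon code:

```python
# Partial genetic-code table; the real (near-universal) table has 64 codons.
CODON_TABLE = {
    "AAG": "Lys",  # lysine, the example given in the text
    "AAA": "Lys",
    "ATG": "Met",  # methionine, also the start codon
    "TGG": "Trp",
    "TAA": "STOP",
}

def translate(dna: str) -> list:
    """Read a DNA string three letters at a time and translate each codon."""
    amino_acids = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "STOP":
            break
        amino_acids.append(aa)
    return amino_acids

print(translate("ATGAAGTGGTAA"))  # ['Met', 'Lys', 'Trp']
```

The same trivial program would work for a sequoia or a bacterium, which is exactly the universality the text describes — and exactly what appears to be missing in the brain.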
At stake is virtually every radical advance in neuroscience that we might be able to imagine—brain implants that enhance our memories or treat mental disorders like schizophrenia and depression, for example, and neuroprosthetics that allow paralyzed patients to move their limbs. Because everything that you think, remember, and feel is encoded in your brain in some way, deciphering the activity of the brain will be a giant step toward the future of neuroengineering.
Someday, electronics implanted directly into the brain will enable patients with spinal-cord injury to bypass the affected nerves and control robots with their thoughts (see “The Thought Experiment”). Future biofeedback systems may even be able to anticipate signs of mental disorder and head them off. Where people in the present use keyboards and touch screens, our descendants a hundred years hence may use direct brain-machine interfaces.
But to do that—to build software that can communicate directly with the brain—we need to crack its codes. We must learn how to look at sets of neurons, measure how they are firing, and reverse-engineer their message.
A Chaos of Codes
Already we’re beginning to discover clues about how the brain’s coding works. Perhaps the most fundamental: except in some of the tiniest creatures, such as the roundworm C. elegans, the basic unit of neuronal communication and coding is the spike (or action potential), an electrical impulse of about a tenth of a volt that lasts for a bit less than a millisecond. In the visual system, for example, rays of light entering the retina are promptly translated into spikes sent out on the optic nerve, the bundle of about one million output wires, called axons, that run from the eye to the rest of the brain. Literally everything that you see is based on these spikes, each retinal neuron firing at a different rate, depending on the nature of the stimulus, to yield several megabytes of visual information per second. The brain as a whole, throughout our waking lives, is a veritable symphony of neural spikes—perhaps one trillion per second. To a large degree, to decipher the brain is to infer the meaning of its spikes.
But the challenge is that spikes mean different things in different contexts. It is already clear that neuroscientists are unlikely to be as lucky as molecular biologists. Whereas the code converting nucleotides to amino acids is nearly universal, used in essentially the same way throughout the body and throughout the natural world, the spike-to-information code is likely to be a hodgepodge: not just one code but many, differing not only to some degree between different species but even between different parts of the brain. The brain has many functions, from controlling our muscles and voice to interpreting the sights, sounds, and smells that surround us, and each kind of problem necessitates its own kinds of codes.
A comparison with computer codes makes clear why this is to be expected. Consider the near-ubiquitous ASCII code representing the 128 characters, including numbers and alphanumeric text, used in communications such as plain-text e-mail. Almost every modern computer uses ASCII, which encodes the capital letter A as “100 0001,” B as “100 0010,” C as “100 0011,” and so forth. When it comes to images, however, that code is useless, and different techniques must be used. Uncompressed bitmapped images, for example, assign strings of bytes to represent the intensities of the colors red, green, and blue for each pixel in the array making up an image. Different codes represent vector graphics, movies, or sound files.
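The ASCII encodings quoted above are easy to verify directly; a quick illustrative check:

```python
# ASCII assigns each character a 7-bit code: 'A' is 100 0001 (decimal 65),
# 'B' is 100 0010, 'C' is 100 0011, matching the values quoted in the text.
for ch in "ABC":
    print(ch, format(ord(ch), "07b"))
# A 1000001
# B 1000010
# C 1000011
```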
Evidence points in the same direction for the brain. Rather than a single universal code spelling out what patterns of spikes mean, there appear to be many, depending on what kind of information is to be encoded. Sounds, for example, are inherently one-dimensional and vary rapidly across time, while the images that stream from the retina are two-dimensional and tend to change at a more deliberate pace. Olfaction, which depends on concentrations of hundreds of airborne odorants, relies on another system altogether. That said, there are some general principles. What matters most is not precisely when a particular neuron spikes but how often it does; the rate of firing is the main currency.
Consider, for example, neurons in the visual cortex, the area that receives impulses from the optic nerve via a relay in the thalamus. These neurons represent the world in terms of the basic elements making up any visual scene—lines, points, edges, and so on. A given neuron in the visual cortex might be stimulated most vigorously by vertical lines. As the line is rotated, the rate at which that neuron fires varies: four spikes in a tenth of a second if the line is vertical, but perhaps just once in the same interval if it is rotated 45° counterclockwise. Though the neuron responds most to vertical lines, it is never mute. No single spike signals whether it is responding to a vertical line or something else. Only in the aggregate—in the neuron’s rate of firing over time—can the meaning of its activity be discerned.
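The orientation-tuning behaviour just described can be sketched as a toy rate-coding model. The Gaussian tuning curve and every parameter below are illustrative assumptions, chosen only to reproduce the “four spikes versus roughly one spike per tenth of a second” example from the text, not measured data:

```python
import math

def firing_rate(stimulus_deg, preferred_deg=90.0, peak_hz=40.0, width_deg=30.0):
    """Hypothetical Gaussian tuning curve: firing rate (spikes/s) falls off
    as the stimulus orientation moves away from the preferred orientation."""
    d = stimulus_deg - preferred_deg
    return peak_hz * math.exp(-(d * d) / (2.0 * width_deg ** 2))

def expected_spikes(stimulus_deg, window_s=0.1):
    """Expected spike count in a 100 ms window, as in the text's example."""
    return firing_rate(stimulus_deg) * window_s

print(round(expected_spikes(90)))  # vertical line: ~4 spikes per 100 ms
print(round(expected_spikes(45)))  # rotated 45 degrees: ~1 spike per 100 ms
```

Note that the neuron is never silent at 45°; only the aggregate rate, not any single spike, carries the orientation signal.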
This strategy, known as rate coding, is used in different ways in different brain systems, but it is common throughout the brain. Different subpopulations of neurons encode particular aspects of the world in a similar fashion—using firing rates to represent variations in brightness, speed, distance, orientation, color, pitch, and even haptic information like the position of a pinprick on the palm of your hand. Individual neurons fire most rapidly when they detect some preferred stimulus, less rapidly when they don’t.
To make things more complicated, spikes emanating from different kinds of cells encode different kinds of information. The retina is an intricately layered piece of nervous-system tissue that lines the back of each eye. Its job is to transduce the shower of incoming photons into outgoing bursts of electrical spikes. Neuroanatomists have identified at least 60 different types of retinal neurons, each with its own specialized shape and function. The axons of 20 different retinal cell types make up the optic nerve, the eye’s sole output. Some of these cells signal motion in several cardinal directions; others specialize in signaling overall image brightness or local contrast; still others carry information pertaining to color. Each of these populations streams its own data, in parallel, to different processing centers upstream from the eye. To reconstruct the nature of the information that the retina encodes, scientists must track not only the rate of every neuron’s spiking but also the identity of each cell type. Four spikes coming from one type of cell may encode a small colored blob, whereas four spikes from a different cell type may encode a moving gray pattern. The number of spikes is meaningless unless we know what particular kind of cell they are coming from.
And what is true of the retina seems to hold throughout the brain. All in all, there may be up to a thousand neuronal cell types in the human brain, each presumably with its own unique role.
Wisdom of Crowds
Typically, important codes in the brain involve the action of many neurons, not just one. The sight of a face, for instance, triggers activity in thousands of neurons in higher-order sectors of the visual cortex. Every cell responds somewhat differently, reacting to a different detail—the exact shape of the face, the hue of its skin, the direction in which the eyes are focused, and so on. The larger meaning inheres in the cells’ collective response.
A major breakthrough in understanding this phenomenon, known as population coding, came in 1986, when Apostolos Georgopoulos, Andrew Schwartz, and Ronald Kettner at the Johns Hopkins University School of Medicine learned how a set of neurons in the motor cortex of monkeys encoded the direction in which a monkey moves a limb. No one neuron fully determined where the limb would move, but information aggregated across a population of neurons did. By calculating a kind of weighted average of all the neurons that fired, Georgopoulos and his colleagues found, they could reliably and precisely infer the intended motion of the monkey’s arm.
One of the first illustrations of what neurotechnology might someday achieve builds directly on this discovery. Brown University neuroscientist John Donoghue has leveraged the idea of population coding to build neural “decoders”—incorporating both software and electrodes—that interpret neural firing in real time. Donoghue’s team implanted a brushlike array of microelectrodes directly into the motor cortex of a paralyzed patient to record neural activity as the patient imagined various types of motor activities. With the help of algorithms that interpreted these signals, the patient could use the results to control a robotic arm. The “mind” control of the robot arm is still slow and clumsy, akin to steering an out-of-alignment moving van. But the work is a powerful hint of what is to come as our capacity to decode the brain’s activity improves.
Among the most important codes in any animal’s brain are the ones it uses to pinpoint its location in space. How does our own internal GPS work? How do patterns of neural activity encode where we are? A first important hint came in the early 1970s with the discovery by John O’Keefe at University College London of what became known as place cells in the hippocampus of rats. Such cells fire every time the animal walks or runs through a particular part of a familiar environment. In the lab, one place cell might fire most often when the animal is near a maze’s branch point; another might respond most actively when the animal is close to the entry point. The husband-and-wife team of Edward and May-Britt Moser discovered a second type of spatial coding based on what are known as grid cells. These neurons fire most actively when an animal is at the vertices of an imagined geometric grid representing its environment. With sets of such cells, the animal is able to triangulate its position, even in the dark. (There appear to be at least four different sets of these grid cells at different resolutions, allowing a fine degree of spatial representation.)
Other codes allow animals to control actions that take place over time. An example is the circuitry responsible for executing the motor sequences underlying singing in songbirds. Adult male finches sing to their female partners, each stereotyped song lasting but a few seconds. As Michale Fee and his collaborators at MIT discovered, neurons of one type within a particular structure are completely quiet until the bird begins to sing. Whenever the bird reaches a particular point in its song, these neurons suddenly erupt in a single burst of three to five tightly clustered spikes, only to fall silent again. Different neurons erupt at different times. It appears that individual clusters of neurons code for temporal order, each representing a specific moment in the bird’s song.
Unlike a typewriter, in which a single key uniquely specifies each letter, the ASCII code uses multiple bits to determine a letter: it is an example of what computer scientists call a distributed code. In a similar way, theoreticians have often imagined that complex concepts might be bundles of individual “features”; the concept “Bernese mountain dog” might be represented by neurons that fire in response to notions such as “dog,” “snow-loving,” “friendly,” “big,” “brown and black,” and so on, while many other neurons, such as those that respond to vehicles or cats, fail to fire. Collectively, this large population of neurons might represent a concept.
An alternative notion, called sparse coding, has received much less attention. Indeed, neuroscientists once scorned the idea as “grandmother-cell coding.” The derisive term implied a hypothetical neuron that would fire only when its bearer saw or thought of his or her grandmother—surely, or so it seemed, a preposterous concept.
But recently, one of us (Koch) helped discover evidence for a variation on this theme. While there is no reason to think that a single neuron in your brain represents your grandmother, we now know that individual neurons (or at least comparatively small groups of them) can represent certain concepts with great specificity. Recordings from microelectrodes implanted deep inside the brains of epileptic patients revealed single neurons that responded to extremely specific stimuli, such as celebrities or familiar faces. One such cell, for instance, responded to different pictures of the actress Jennifer Aniston. Others responded to pictures of Luke Skywalker of Star Wars fame, or to his name spelled out. A familiar name may be represented by as few as a hundred and as many as a million neurons in the human hippocampus and neighboring regions.
Such findings suggest that the brain can indeed wire up small groups of neurons to encode important things it encounters over and over, a kind of neuronal shorthand that may be advantageous for quickly associating and integrating new facts with preëxisting knowledge.
If neuroscience has made real progress in figuring out how a given organism encodes what it experiences in a given moment, it has further to go toward understanding how organisms encode their long-term knowledge. We obviously wouldn’t survive for long in this world if we couldn’t learn new skills, like the orchestrated sequence of actions and decisions that go into driving a car. Yet the precise method by which we do this remains mysterious. Spikes are necessary but not sufficient for translating intention into action. Long-term memory—like the knowledge that we develop as we acquire a skill—is encoded differently, not by volleys of constantly circulating spikes but, rather, by literal rewiring of our neural networks.
That rewiring is accomplished at least in part by resculpting the synapses that connect neurons. We know that many different molecular processes are involved, but we still know little about which synapses are modified and when, and almost nothing about how to work backward from a neural connectivity diagram to the particular memories encoded.
Another mystery concerns how the brain represents phrases and sentences. Even if there is a small set of neurons defining a concept like your grandmother, it is unlikely that your brain has allocated specific sets of neurons to complex concepts that are less common but still immediately comprehensible, like “Barack Obama’s maternal grandmother.” It is similarly unlikely that the brain dedicates particular neurons full time to representing each new sentence we hear or produce. Instead, each time we interpret or produce a novel sentence, the brain probably integrates multiple neural populations, combining codes for basic elements (like individual words and concepts) into a system for representing complex, combinatorial wholes. As yet, we have no clue how this is accomplished.
One reason such questions about the brain’s schemes for encoding information have proved so difficult to crack is that the human brain is so immensely complex, encompassing 86 billion neurons linked by something on the order of a quadrillion synaptic connections. Another is that our observational techniques remain crude. The most popular imaging tools for peering into the human brain do not have the spatial resolution to catch individual neurons in the act of firing. To study neural coding systems that are unique to humans, such as those used in language, we probably need tools that have not yet been invented, or at least substantially better ways of studying highly interspersed populations of individual neurons in the living brain.
It is also worth noting that what neuroengineers try to do is a bit like eavesdropping—tapping into the brain’s own internal communications to try to figure out what they mean. Some of that eavesdropping may mislead us. Every neural code we can crack will tell us something about how the brain operates, but not every code we crack is something the brain itself makes direct use of. Some of them may be “epiphenomena”—accidental tics that, even if they prove useful for engineering and clinical applications, could be diversions on the road to a full understanding of the brain.
Nonetheless, there is reason to be optimistic that we are moving toward that understanding. Optogenetics now allows researchers to switch genetically identified classes of neurons on and off at will with colored beams of light. Any population of neurons that has a known, unique molecular zip code can be tagged with a fluorescent marker and then be either made to spike with millisecond precision or prevented from spiking. This allows neuroscientists to move from observing neuronal activity to delicately, transiently, and reversibly interfering with it. Optogenetics, now used primarily in flies and mice, will greatly speed up the search for neural codes. Instead of merely correlating spiking patterns with a behavior, experimentalists will be able to write in patterns of information and directly study the effects on the brain circuitry and behavior of live animals. Deciphering neural codes is only part of the battle. Cracking the brain’s many codes won’t tell us everything we want to know, any more than understanding ASCII codes can, by itself, tell us how a word processor works. Still, it is a vital prerequisite for building technologies that repair and enhance the brain.
Take, for example, new efforts to use optogenetics to remedy a form of blindness caused by degenerative disorders, such as retinitis pigmentosa, that attack the light-sensing cells of the eye. One promising strategy uses a virus injected into the eyeballs to genetically modify retinal ganglion cells so that they become responsive to light. A camera mounted on glasses would pulse beams of light into the retina and trigger electrical activity in the genetically modified cells, which would directly stimulate the next set of neurons in the signal path—restoring sight. But in order to make this work, scientists will have to learn the language of those neurons. As we learn to communicate with the brain in its own language, whole new worlds of possibilities may soon emerge.
Christof Koch is chief scientific officer of the Allen Institute for Brain Science in Seattle. Gary Marcus, a professor of psychology at New York University and a frequent blogger for the New Yorker, is coeditor of the forthcoming book The Future of the Brain.
Facebook announced Thursday that it will begin to prioritize posts in the News Feed from friends and family over public content and posts from publishers. It will also move away from using “time spent” on the platform as a metric of success and will instead focus on “engagement” with content, such as comments.
Why it matters: Facebook is the most widely used news and information platform in the world; almost half of Americans rely on it for news. These changes will significantly impact the way people around the world receive and distribute information, possibly limiting the spread of fake news.
Moving forward, Facebook will prioritize “posts that spark conversations and meaningful interactions” between people.
Pages will remain in the News Feed, but they will likely see their reach, video watch time, and referral traffic decrease.
Facebook Head of Product Adam Mosseri says the move is more about valuing stories that facilitate meaningful interactions between people.
The change will completely shift the publishing landscape, to the disadvantage of publishers that rely on the tech giant for traffic.
But, but, but: Facebook Journalism Project lead Campbell Brown told publishers in an email that the change will not affect links to publisher content shared by friends.
What this means for brands
In the short term, this will cause a tsunami of changes for everyone: Facebook, publishers, advertisers, investors, etc.
In the long term, it will force the entire digital ecosystem to focus on building meaningful relationships with consumers instead of click-bait. Audiences vs. traffic, as The Verge’s Casey Newton puts it.
“My initial reaction is it appears organic reach is finally moving toward zero,” says Rich Greenfield, media analyst at BTIG. “Zuckerberg is basically telling brands you either need to spark a meaningful, engaging conversation with your content — or spend ad dollars to reach consumers in the News Feed.
“It puts tremendous pressure/focus on great storytelling.”
Zuck’s mission: Bring back meaning
Most Americans use Facebook for news, yet many say it is the platform they trust least as a news source.
As BuzzFeed’s Craig Silverman points out, the platform is not being used in the way its founder had envisioned, which Zuckerberg made clear to investors in his opening statement on his last earnings call.
The move away from “time spent” as a metric of success is likely a response to that revelation, nudging users to spend less time “passively scrolling” and more time in conversation.
“When people are engaging with people they’re close to, it’s more meaningful, more fulfilling,” David Ginsberg, director of research at Facebook, told The New York Times. “It’s good for your well-being.”
Referral-traffic patterns suggest that Facebook has been planning a pivot to “meaningful engagement” for months.
The tech giant created the “Facebook Journalism Project” to mend its broken relationships with publishers a year ago, in anticipation of strategy changes.
It has been trying to steer premium publishers toward its separate “Watch” video tab since last year.
Executives have repeatedly told investors that News Feed ad inventory was becoming saturated, slowing ad load growth, and that they would shift publishers toward video-based partnerships instead.
The publisher dilemma
Publishers, specifically those that rely on Facebook for the majority of their traffic, will probably be hit hardest by these changes in the short term.
However, most premium publishers have a healthy balance of traffic referrals across the ecosystem, according to a study from Parse.ly, which measures referral traffic for medium- to large-sized vetted publishers.
This is especially true for some of the larger, most established players that have diversified revenue models and traffic referral strategies.
Upstart publishers that have leaned on Facebook for audience in the past few years might be uniquely affected by the change, according to Parse.ly CTO Andrew Montalenti.
The bottom line
Meaningful engagement with the platform is not just a moral decision for Zuckerberg:
Facebook has seen younger audiences flock to Snapchat and other apps because, on Facebook, they don’t feel a sense of intimacy with close connections or feel empowered to participate in meaningful conversations.
Until now, Facebook tried to acquire or copy competitors that innovated toward meaning.
Now, it’s taking a step to ensure users don’t abandon a platform that unintentionally got away from its mission.
This is the first meaningful response by a technology CEO to the looming “techlash” against the giant technology companies controlling our lives.
What to watch
This will create a new wave of publishers and technology focused on direct-to-consumer interactions.
Expect artificial intelligence and chatbots to gain more traction as brands and publishers try to figure out the best ways to facilitate meaningful conversation and engagement.
Publishers will pivot away from meaningless short-form video, because the update will weed publisher video out of the News Feed if it doesn’t drive meaningful conversations. Expect publishers instead to invest in quality, on-demand video on Facebook Watch.