AWS took a lot of heat when its S3 storage service went down for several hours on Tuesday, and rightly so. But today the company published a post-mortem explaining exactly what happened, complete with technical details, and how it plans to prevent a similar event from occurring in the future.
At the core of the problem was, unsurprisingly, human error. Some poor engineer, we’ll call him Joe, was tasked with entering a command to shut down some storage sub-systems. On a typical day this doesn’t cause any issue whatsoever. It’s a routine kind of task, but on Tuesday something went terribly wrong.
Joe was an authorized user, and he entered the command according to procedure based on what Amazon calls “an established playbook.” The problem was that Joe was supposed to issue a command to take down a small number of servers on an S3 sub-system, but he made a mistake, and instead of taking down just that small set of servers, Joe took down a much larger set.
In layman’s terms, that’s when all hell broke loose.
Amazon explains it much more technically, but suffice to say that error had a cascading impact on the S3 storage in the Northern Virginia datacenter. To make a long story short, Joe’s error took down some crucial underlying sub-systems, which removed a significant amount of storage capacity, which caused the systems to restart. As this happened, S3 couldn’t service requests, which caused even AWS’s own dashboard to go down (which is, you know, kind of embarrassing).
By now, the outside world had started to feel the impact, and your favorite websites, apps and cloud services were beginning to behave in a wonky fashion.
As the afternoon wore on, the company was working feverishly to get the service back online, but the size of the systems was working against them. When the system shut down, something that AWS says it hasn’t had to do in many years, it became a victim of its own success. S3 capacity had grown to such an extent in the affected datacenter that when they restarted, running all of the safety checks and validating the integrity of the underlying metadata took a mite bit longer than they expected.
To reduce the prospect of a similar human error in the future, the company is making some changes. In their words, “We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level.” That should prevent someone like Joe from making a similar mistake in the future.
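In rough terms, the safeguard Amazon describes amounts to a guard check before any capacity is removed. Here's a minimal sketch of that idea; the function names, subsystems, and thresholds are hypothetical illustrations, not AWS's actual tooling:

```python
# Hypothetical sketch of the safeguard AWS describes: refuse to remove
# capacity if doing so would drop any subsystem below its minimum
# required level. All names and numbers here are illustrative.

MIN_REQUIRED = {"index": 10, "placement": 5}  # minimum servers per subsystem

def remove_capacity(current, subsystem, count):
    """Remove `count` servers from `subsystem`, but never below its minimum."""
    if subsystem not in MIN_REQUIRED:
        raise ValueError(f"unknown subsystem: {subsystem}")
    remaining = current[subsystem] - count
    if remaining < MIN_REQUIRED[subsystem]:
        # This is the new guard: the command is rejected outright
        # instead of silently taking the subsystem below its floor.
        raise RuntimeError(
            f"refusing: {subsystem} would drop to {remaining}, "
            f"below minimum {MIN_REQUIRED[subsystem]}"
        )
    current[subsystem] = remaining
    return current

fleet = {"index": 100, "placement": 50}
remove_capacity(fleet, "index", 20)    # fine: 80 servers remain
# remove_capacity(fleet, "index", 95)  # would raise: below minimum
```

A fat-fingered count like Joe's would trip the guard and fail loudly rather than cascade, which is the behavior the post-mortem describes adding.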
In addition, AWS is looking at ways to break down those S3 sub-systems, which were core to the problem, into much smaller pieces or cells, as they call them, something they have tried to do in the past. Obviously, the sub-systems proved too large to recover quickly (or at least quickly enough).
They close with an apology and a promise to do better. In the end, it was a combination of factors that caused the issue, starting with a human error and then cascading across systems that hadn’t been designed to deal with an error of this magnitude.
Kids growing up in low-income neighborhoods have always faced extra challenges when it comes to keeping up with their middle- to high-income peers. And with the dawning of the digital age, low-income students now face a new, unprecedented challenge: access to high-speed internet.
More than ever, students are required to go online to complete homework, collaborate on projects and conduct research. For students with no internet access at home, this can be a daunting challenge. Even though 85 percent of the nation has access to broadband internet, one White House release noted that less than half of the households in the lowest income bracket have an internet subscription at home.
“While many middle-class U.S. students go home to internet access…too many lower-income children go unplugged every afternoon when school ends,” according to the White House release. “This ‘homework gap’ runs the risk of widening the achievement gap, denying hardworking students the benefit of a technology-enriched education.”
In response, the White House launched ConnectHome in 2015. This initiative set out to bring high-speed internet to more than 275,000 low-income families in 28 communities across the nation. Likewise, companies like Microsoft, Google and Comcast have rolled out programs to help bridge the digital divide.
Although these large-scale programs are leading the way, it’s smaller, grassroots startups that will ensure the delivery of high-speed internet to every home in America. Startups and nonprofits across the country are stepping up to help low-income families connect to the internet. They deliver programs that aim to facilitate the personal, academic and professional growth of their local communities.
One of the most established organizations delivering internet access to those who can’t afford it is Connecting for Good. This nonprofit based in Kansas City formed in 2011, after Google announced that Kansas City would be the first city to receive Google Fiber. By the end of 2012, Connecting for Good had installed its first free Wi-Fi network, providing connectivity to almost 400 low-income residents.
In addition, the group provides digital literacy training and offers refurbished laptops for just $50. Connecting for Good is funded through grants and private donations, and one of its co-founders was named the manager of President Obama’s ConnectHome program.
In the Bronx, where one-third of the residents go without home internet, startup Neture, Inc. aims to provide reliable, high-speed internet access to all. Founded in 2014, Neture launched its third phase last summer, and currently provides a combination of free and low-cost internet to several Bronx neighborhoods. The company also provides free computers as amenities in some apartment complexes, and is devoted to teaching digital literacy.
Because the achievement gap is a mosaic of different challenges, some startups are focused more on teaching tech skills and harnessing available tech to improve student support. Even though these organizations aren’t providing internet access, the ultimate success of all of these ventures — and the children they hope to help — will depend on universal access to home internet.
Code Fever is a southern Florida startup on a mission to encourage under-resourced and minority students to learn coding. The company hopes to inspire these students, ages 13 to 21, to create their own tech-related startups in their communities and become the STEM (science, technology, engineering and math) leaders of the future. Code Fever started in 2013 and raised $75,000 through crowdfunding efforts.
TalkingPoints is another organization working to bridge the achievement gap with technology. The founders wanted to provide a way for parents, students and teachers to overcome language barriers, so they developed a multilingual texting platform that can translate text messages into the language of the recipient. Teachers can text parents in English, and the parents are able to read the message (and respond) in their own language. TalkingPoints is free to schools and supported by grants and private donations.
In a world that keeps getting faster and more tech-savvy every day, it’s crucial for all children to have access to the tools and resources they need to become productive, successful members of society. As time goes on, home internet access will become increasingly necessary for students to engage, grow and perform at school. Fortunately, government programs, big business initiatives and grassroots startups are already well on their way to making high-speed internet accessible to every household.
Zootopia has won the Oscar for Best Animated Feature at the 89th Academy Awards in Los Angeles on Sunday night. The Disney film, a buddy comedy between a rabbit police officer and a fox con artist set in a city full of anthropomorphic animals, was the frontrunner at Hollywood’s biggest night, having won the Golden Globe, Annie, and Critics’ Choice awards previously.
“Thank you Academy, this is an incredible honour. About five years ago, almost six now, we got this crazy idea about humanity with talking animals, in the hopes that when the film came out, it would make the world just a slightly better place,” Byron Howard, co-director on Zootopia, said in his acceptance speech.
“And we are so grateful to the audiences over the world who embraced this film, with this story of tolerance being more powerful than fear of the other,” co-director Rich Moore added.
Zootopia was one of our favourite movies from last year, not just for its heartfelt storytelling, but more so for how it tackled a variety of topical themes – gender stereotyping, xenophobia, and racial diversity – in a kid-friendly package.
The critical acclaim it garnered has helped the movie to its multiple award wins, but Zootopia was also the most commercially successful Oscar nominee, with its $1-billion-plus haul at the worldwide box office beating out Moana, Kubo and the Two Strings, My Life as a Zucchini, and The Red Turtle.
Directed by Byron Howard and Rich Moore, and co-directed by Jared Bush, the Disney animated adventure featured an extensive voice cast of Ginnifer Goodwin, Jason Bateman, Idris Elba, Jenny Slate, Nate Torrence, Bonnie Hunt, Don Lake, Tommy Chong, J. K. Simmons, Octavia Spencer, Alan Tudyk, and Shakira.
The Oscars are still ongoing in Los Angeles, with some of the biggest awards – Best Picture, and Best Actor/Actress among others – yet to be given out. Mahershala Ali won the first Oscar of the night – Best Supporting Actor – for his work on Moonlight. Suicide Squad took home Best Makeup and Hairstyling, while Fantastic Beasts and Where to Find Them won Best Costume Design.
Nokia sees demand for higher speed 4G network equipment starting to recover this year, led by Japan, the company’s chief executive Rajeev Suri said on Sunday as he announced a series of contracts with telecom operators.
Speaking at a news conference ahead of the Mobile World Congress in Barcelona, Suri also predicted a new wave of industry consolidation among telecom operators in the US and Indian markets in the course of 2017.
“Noise about carrier M&A will heat up dramatically in United States and India. The pent-up demand for action is there,” Suri said.
Nokia and its rivals, Sweden’s Ericsson and China’s Huawei, have struggled lately as telecom operators’ demand for faster 4G mobile broadband equipment has peaked, and upgrades to next-generation 5G equipment are still years away.
Nokia repeated that while it expected the global networks market to fall around 2 percent in 2017, it spotted growth opportunities in markets such as North America, India and Japan.
“We believe that the (overall) primary market in which we compete will be down again… but to be considerably better than last year,” Suri said, anticipating a slower rate of decline.
“Investments in 4G, particularly in advanced 4G technology, will pick back up in some key markets, such as Japan.”
Earlier this month, Nokia reported its profits for the final quarter of last year fell less than expected, helped by cost cuts and the acquisition of Alcatel-Lucent.
The Finnish company has reached a “landmark” three-year deal with Telefonica to build networks in London, Suri said on Sunday, adding that the contract propels Nokia past Ericsson as the leading network supplier in Britain.
Nokia also announced that it was working with US telecom carrier Verizon and semiconductor giant Intel to supply equipment for pre-commercial 5G services in US markets, including Dallas.
Suntrust analyst Georgios Kyriakopoulos cautioned that global weakness in operator spending will likely remain for a long time and that projected consolidation will likely serve as a further drag on results for equipment vendors such as Nokia.
“The fact Suri predicted more M&A in that space means Nokia’s core business faces some challenges,” he said.
Meanwhile, Japan’s SoftBank Group Corp is prepared to cede control of Sprint Corp to Deutsche Telekom AG’s T-Mobile US Inc to clinch a merger of the two US mobile carriers, sources told Reuters earlier this month.
Have you ever thought of having a tablet that can be stretched from a small size to a large one, or a wallpaper that turns a wall into an electronic display? That vision may soon become reality.
Engineering researchers at Michigan State University (MSU) have developed the first stretchable integrated circuit that is made entirely using an inkjet printer, raising the possibility of inexpensive mass production of smart fabric.
“We can conceivably make the costs of producing flexible electronics comparable to the costs of printing newspapers. Our work could soon lead to printed displays that can easily be stretched to larger sizes, as well as wearable electronics and soft robotics applications,” said Chuan Wang, Assistant Professor at MSU.
Since the material can be produced on a standard printer, it has a major potential cost-advantage over current technologies that are expensive to manufacture.
According to the researchers, the smart fabric is made up of several materials fabricated from nanomaterials and organic compounds.
“These compounds are dissolved in solution to produce different electronic inks, which are run through the printer to make the devices,” a paper published in the journal ACS Nano noted.
Researchers created an elastic material, the circuit and the organic light-emitting diode, or OLED from the ink.
Researchers estimate that in a year or two, they will be able to combine the circuit and OLED into a single pixel, and once that is done, the smart fabric can be potentially commercialised.
“Conceivably, the stretchable electronic fabric can be folded and put in one’s pocket without breaking. This is an advantage over current ‘flexible’ electronics material technology that cannot be folded,” added Wang.
“We have taken it one big step beyond the flexible screens that are about to become commercially available,” Wang added.
Mark Zuckerberg has revealed deep-seated concerns that the tide is turning against globalisation.
In an interview with the BBC on Thursday, the Facebook founder said that fake news, polarised views and “filter bubbles” were damaging “common understanding”.
He said people had been left behind by global growth, sparking demands to “withdraw” from the “connected world”.
In a call to action, he said people must not “sit around and be upset”, but act to build “social infrastructures”.
“When I started Facebook, the mission of connecting the world was not controversial,” he said.
“It was as if it was a default assumption that people had; every year the world got more connected and that seems like the direction things were heading in.
“Now that vision is becoming more controversial.”
He told the BBC: “There are people around the world that feel left behind by globalisation and the rapid changes that have happened, and there are movements as a result to withdraw from some of that global connection.”
Zuckerberg’s interview comes alongside the publication of a 5,500-word letter he has written about the future of Facebook and the global economy.
In it, Zuckerberg quotes Abraham Lincoln who spoke of acting “in concert”, and talks about “spiritual needs”, civic engagement and says that many people have “lost hope for the future”.
“For a couple of decades, maybe longer, people have really sold this idea that as the world comes together everything is going to get better,” he said.
“I think the reality is that over the long term that will be true, and there are pieces of infrastructure that we can build to make sure that a global community works for everyone.
“But I do think there are some ways in which this idea of globalisation didn’t take into account some of the challenges it was going to create for people, and now I think some of what you see is a reaction to that.
“If people are asking the question, is the direction for humanity to come together more or not? I think that answer is clearly yes.
“But we have to make sure the global community works for everyone. It is not just automatically going to happen,” he said.
The long, confusing lifecycle of AMD’s beastly Radeon Pro Duo is quietly entering its final days as retailers clear the deck for the forthcoming Radeon Vega graphics cards.
The $1,500 MSRP Radeon Pro Duo reigns as AMD’s graphics champion, with not one but two high-end Fiji graphics processors, exotic high-bandwidth memory, and integrated closed-loop water cooling that keeps the board running at chilly temperatures. But the timing and messaging around the graphics card just felt wrong from day one.
AMD first teased a then-unnamed dual-Fiji graphics card at E3 2015—a decidedly consumer-focused gaming event—alongside the Fury, Fury X, and Radeon Nano. While those gamer-focused cards launched in relatively short order, the Radeon Pro Duo languished all the way until March of last year, when it released with a newfound focus on professional users and hellacious GPU compute chops.
That bummed out enthusiasts hoping to unleash the power of two Fury X GPUs in a single card, but it made sense for AMD to focus on development scenarios instead as the effective 4GB capacity of first-gen HBM memory would no doubt hinder gaming performance at the resolutions and detail settings that two Fiji GPUs could push. Plus, multi-GPU gaming has been punched in the gut over the past two years, with few major game releases supporting CrossFire or SLI setups, further diminishing the Radeon Pro Duo’s potential effectiveness.
Unfortunately the marketing for the Radeon Pro Duo was muddled and confusing right up until its launch, and once the ferocious GeForce GTX 1080 released a mere two months later there was little reason for many users to consider buying AMD’s technological champion no matter how impressive it was on paper. AMD never even sent the Pro Duo to consumer publications for testing.
Now that AMD’s publicly showing off its high-end Radeon Vega graphics cards, which pack performance upgrades and second-gen HBM memory that ditches the limitations of the initial product, the Radeon Pro Duo’s days are numbered, as TechPowerUp pointed out. While Amazon’s selling the XFX Radeon Pro Duo for $1150, a $350 discount, Newegg’s selling it for $800, as is Japan’s PC4U—nearly half off!
Or at least it was. The remaining stock sold out quickly at that price on Newegg, proving yet again that there’s no such thing as bad hardware, only hardware at bad prices. The XFX Radeon Pro Duo is available once again on Newegg, but for $820 now.
If you’re curious about what’s to come in Vega, Gordon Mah Ung and I recently chatted with Radeon SVP and chief architect Raja Koduri for more than 40 minutes at CES, drilling deep into the details of AMD’s next-gen graphics cards. Check out the full interview below.
Samsung Electronics on Monday blamed batteries supplied by two manufacturers for the overheating and even explosions of some Galaxy Note7 phones, as it tried to provide a long due explanation for the issues surrounding the smartphone.
At the announcement, made a day ahead of the company reporting its fourth-quarter results, experts from TUV Rheinland, Exponent and UL stated that internal manufacturing and design defects in the batteries — including, in some cases, missing insulating tape — were responsible for the battery issues, not the design of the phones themselves.
The negative electrode windings in the battery of an unnamed “manufacturer A,” who first supplied the batteries for the Note7 phones, were found in some cases to be damaged and bent over because the cell pouch did not provide enough volume to accommodate the battery assembly, said Kevin White, Exponent’s principal scientist, at a press conference that was webcast.
There were signs of internal short circuit at different locations of the cells from five of the damaged devices, said Sajeev Jesudas, president of the consumer business unit of UL. He also pointed to deformation of the upper corners of the batteries, missing insulation tapes on the tabs, and the use of thin separators as some of the factors that could contribute to a short circuit.
After incidents were reported in the field, Samsung turned to another supplier, referred to by the company as “manufacturer B.” But welding defects in “some incident cells were found to be tall enough to bridge the distance to the negative electrode foil,” raising the possibility of short circuits and self-heating, White said.
Samsung turned to Amperex Technology in Hong Kong to supply batteries for the replacement Note7 phones after issues were reported with batteries supplied by affiliate Samsung SDI, the Wall Street Journal reported, citing people familiar with the matter.
Samsung’s team of investigators checked the Note7’s features such as fast charging, water resistance and its newly-introduced iris scanner for a possible role in the explosions but found those had not had an impact, said D.J. Koh, president of the Mobile Communications Business at Samsung.
More than 700 Samsung researchers and engineers spent months testing over 200,000 Note7 phones and 30,000 phone batteries before arriving at their conclusions, he said.
In the wake of reports of overheating of the lithium-ion batteries, Samsung announced a global recall of the Note7 in early September after it found a “battery cell issue.” The U.S. Consumer Product Safety Commission also announced on Sept. 15 a recall in the U.S. of about 1 million Note7 phones.
The replacement phones Samsung shipped out also had battery issues leading the company to recall the phones again and end production of the device. By Oct. 13, CPSC had expanded the recall to include replacement Note7 phones that Samsung had supplied to customers under the first recall program.
Samsung said that 96 percent of the roughly 3 million Galaxy Note7 phones “sold and activated” had been returned by users. As some customers had not returned their phones, despite an offer of an exchange for another Samsung device or a refund, the company resorted to working with cellular operators in some markets, such as the U.S. and Australia, to disconnect the phones from the network.
The Note7 recall was a public-relations and financial debacle for Samsung, which reported that the third quarter revenue of its IT and Mobile Communications division was down 15 percent from the same period last year to 22.5 trillion Korean won (US$19.8 billion) while operating profit fell 95 percent to 100 billion won, as a result of the discontinuation of the Note7.
The company now expects a turnaround in the fourth quarter, largely because of a better showing by its components business that includes memory chips and displays. In guidance issued earlier this month, the company said its profit has grown year-on-year by close to 50 percent in the quarter. Revenue for the quarter is expected to be about the same as in the fourth quarter of the previous year.
Samsung is trying to put the Note7 debacle behind it and may well succeed. “Most in the US and Europe had forgotten about it already. It’s China they really need to lean into and make sure this message sticks,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy.
To reassure customers, Samsung also discussed steps it was taking to ensure product quality at every level of product development, including an eight-point safety check for batteries. Teams will focus, for example, on key components and work with external advisers to make preventative checks for any issues.
A battery advisory group of external advisers made up of academic and research experts is expected to provide the company a “clear and objective perspective on battery safety and innovation.” The company is also introducing improved algorithms for managing battery charging temperature, and charging current and duration.
“I liked that they added new processes and enhanced others in the 8-step safety check,” said Moorhead. “The new software is very interesting, too. Even better was the board of advisors that are there to assist on future decisions.”
The future will be even more challenging as consumers are demanding thinner devices that have longer battery life, he added.
In the short term though there could be concerns from consumers about lithium-ion batteries after Samsung disclosed that two manufacturers had made serious mistakes. “This level of promotion will give some pause for a while as it relates to Li-ion devices, but as with most recalls, it will be forgotten in six months,” Moorhead said in an email.
Valve’s added quite a few quality-of-life improvements to Steam in the past few years, including the ability to install multiple games at the same time, tagging games you already own in the store so you don’t try to buy them again, and adding controller support to every game.
But last week’s update will be particularly welcome news to anyone with multiple hard drives: You can now move a game’s install folder to a different Steam Library folder from within Steam itself.
Prior to now, you had two options: either uninstall the game and reinstall it on a new drive, or (smarter) move the game’s install folder manually by drag-and-dropping it into a different Steam Library folder.
The problem is Steam didn’t like this latter option much. After moving an install, Steam needed to re-verify all of the game’s files one at a time to make sure nothing got screwed up. And Steam wouldn’t even register a game’s presence if you moved it over from an external drive, instead continuing to show you an “Install” button in the client until after it’d gone through this verification process.
Things are simpler now though. Right-click on a game, select Properties, then Local Files, and at the bottom you’ll see a button tagged “Move Install Folder.” Click it and you’ll be presented with a list of all your available Steam library locations. Easy.
Unfortunately there doesn’t seem to be a way to batch-move a bunch of games yet, so those looking to migrate an entire hard drive’s worth of games are probably better off using the ol’ drag-and-drop method for the time being. Hopefully that functionality is on the way. Still, Steam’s newfound feature is a good start—especially for those with limited SSD space who are constantly moving installs around.
That’s how it felt when I sat down to play Torment: Tides of Numenera recently. I was the last one scheduled to use the PC that inXile had set up in the corner, and so I was told I could play pretty much as long as I wanted.
I took full advantage of that offer, wending my way through half a dozen discrete stories in “The Bloom,” a massive slug-like organism that contains an entire city inside its stomach, itself stuffed with multiple political factions, a transdimensional marketplace, and “The Gullet”—a place where those who’ve fallen out of favor with the Bloom are slowly digested and then spat back out, body intact but their memories gone.
Torment: Tides of Numenera is weird. As. Hell.
And that’s possibly the highest compliment I could pay. This is, after all, a successor to 1999’s Planescape: Torment, a game considered a classic in large part because of its nontraditional structure and setting. We’re delving into 17-year-old spoilers here, but Planescape: 1) Eschews the traditional “save-the-world” drama to focus on an intensely personal, philosophical story. 2) Has you play as a no-name character who has died-and-been-resurrected-sans-memories possibly hundreds of times. 3) Takes place across multiple “planes of existence” a.k.a. dimensions, including one memorable sequence wherein a city slips over the boundary between Neutral and Chaotic planes, with disastrous results.
Play Planescape nowadays (I finished it again two weeks ago) and it’s clunky and ugly and often obtuse. It’s still revered though, because in all the 40-odd years of video game history there is no other game quite like it.
I’m trying not to make any grand, sweeping statements about Torment: Tides of Numenera. I’ve only played a small fraction of the game. But what I’ve seen is spot-on, striking the same balance as its spiritual predecessor—weird and alien, but not in a random way. Quite the contrary. The Bloom has well-established rules, a structure you’ll come to know and understand across dozens of different conversations.
It’s a dense game. Not maybe so dense as Planescape, where dialogue often reads more like a series of philosophical dissertations than two characters talking. Tides of Numenera is better at hiding its subtext within the trappings of an actual conversation.
Still, there’s a lot of information hidden within the branches of any given dialogue. And if you choose to pursue it, digging to the end of every chain, the game notices: You’ll grow closer to the “Blue Tide.”
Think of Tides sort of like Alignments in Dungeons & Dragons, but linked to the context of an action rather than its morality. The Blue Tide is related to seeking out information, for instance, while Gold is related to sacrificing yourself for others, and Red means you follow your passions.
As I said though, these aren’t tied to morality. You see Gold and its relation to sacrifice, to empathy, and you think “Well that’s the Good Guy route”—and yet a Gold Tide can just as easily attach itself to false modesty, to sacrifice in the name of manipulation. Similarly, those with an affinity for the Red Tide are known only for their passions, whether it’s rushing into action against evil or good.
Me? I was Blue Tide with a mix of Indigo—the Tide associated with compromise and justice. And that’s where we get into another parallel between Planescape and Tides of Numenera: In three hours, I only got into a single battle.
See, another reason Planescape is so famous is that most often the “best” route through any given quest didn’t involve any combat at all. Unlike most RPGs, the most important stats were Wisdom, Intelligence, and Charisma. Even the final boss battle can be won simply by talking.
I’m not sure Tides of Numenera goes that far, but it definitely comes close. Even in the midst of battle, you can stop and talk, try to appease your enemies instead of killing them outright. You can talk your way through pretty much every quest it seems, given the right approach and stats.
Your character and party members each have points in Might, Speed, and Intellect, and it’s pretty obvious what those correlate to. What’s interesting is how Tides of Numenera resolves skill checks. Rather than simply comparing your stats to a number and giving you a Yes/No answer, Tides of Numenera lets you “spend” your stats to get a better result.
For instance: You’re trying to convince two groups from opposing factions to leave each other alone. You use a Might check, and your base stats give you a 40 percent chance of succeeding. Well, you have 8 points in Might, so you can spend 3 of them to raise that chance up to, say, 85 percent—a much better chance.
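That effort-spending mechanic is simple enough to model. Here's a minimal sketch assuming each spent point adds a flat bonus; the 15-percent-per-point figure is just inferred from the 40-to-85-percent example above, not a confirmed detail of the game's actual formula:

```python
# Illustrative model of Tides of Numenera's stat-spending skill checks.
# The flat 15%-per-point bonus is inferred from the 40% -> 85% example;
# the real game's numbers may differ.

BONUS_PER_POINT = 0.15

def skill_check_chance(base_chance, points_spent):
    """Success chance after spending stat-pool points, capped at 100%."""
    return min(1.0, base_chance + points_spent * BONUS_PER_POINT)

def spend_points(pool, points):
    """Deduct spent points from a stat pool; they come back after resting."""
    if points > pool:
        raise ValueError("not enough points in the pool")
    return pool - points

might_pool = 8
chance = skill_check_chance(0.40, 3)      # 0.40 + 3 * 0.15 = 0.85
might_pool = spend_points(might_pool, 3)  # 5 points left until you rest
```

The interesting design consequence is the trade-off this creates: every point spent now is a point unavailable for later checks, until you rest.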
Those spent points aren’t gone forever. They return when you rest, which—like any other Infinity Engine-inspired RPG—suggests that you can abuse the rest system. But that’s a question for the full review. At the moment it at least seems like a more interesting idea than just thumbs up/down based on your raw stats.
Asking inXile’s George Ziets about abusing the rest system, I also found out certain quests in the game are timed. It’s not on a level with Fallout, with its 100-day limit. One of Torment’s goals though is to make failure as interesting as success, and so certain quests will move on with or without your assistance.
An example, trying to avoid as many spoilers as possible: At one point while exploring the Bloom, you’ll come across a group of religious protesters who demand that [A Thing] happens, and soon, or else violence may result. If you resolve the quest right away, great, you’re done. But if you put it off for a few days, Ziets tells me you might return to find the protesters have indeed taken matters into their own hands, with much more disastrous results.
It’s an interesting idea, though like all time-based missions in games I expect it’ll also be divisive. Players really seem to love the idea of dropping quests into a big ol’ bucket, returning to them at their leisure. I expect a few complaints.
Three hours with a game is a long time—even side-stepping story spoilers as inXile requested, there’s still seemingly infinite amounts of stuff I could cover. The bottom line though is that from what I’ve played, Tides of Numenera is what I’d want from a Planescape successor in 2017. It’s more approachable, more “modern,” but keeps the trademark weirdness that defined its predecessor.
We can do literally anything with video games, go anywhere, create any world we want. We’re not limited to generic sword-and-board fantasy and post-apocalypse retreads and modern war games. It’s high time an RPG recognized that fact again. Torment: Tides of Numenera might just.