Sunday, August 30, 2020

Mars dust devil! Curiosity rover spots Red Planet twister (photos)



On Aug. 9, 2020, NASA’s Curiosity Mars rover photographed a dust devil swirling through Gale Crater. The twister is moving from left to right, at the border between the darker and lighter slopes. (Image credit: NASA/JPL-Caltech)
NASA’s Mars rover Curiosity has spotted a dust devil swirling across the parched Red Planet landscape.
Curiosity photographed the dust devil on Aug. 9, capturing the ghostly feature dancing along the border between dark and light slopes inside Mars’ 96-mile-wide (154 kilometers) Gale Crater.
TOYOTAIZATION REDUX

When the Toyota Way Meets Industry 4.0

The real message here is: "adopt and adapt technology that supports your people and processes."

Jeffrey Liker
AUG 27, 2020

Society has reached the point where one can push a button and be immediately deluged with technical and managerial information. This is all very convenient, of course, but if one is not careful there is a danger of losing the ability to think. We must remember that in the end it is the individual human being who must solve the problems.

—Eiji Toyoda, Creativity, Challenge and Courage, Toyota Motor Corporation, 1983

In the 1990s Toyota’s principles of production equipment became “simple, slim, and flexible,” which some people might interpret as “go slow and be cautious in adopting new technology.” In today’s age of lightning speed technological change, particularly in the digital world, I believe that would be a mistake. The real message here is: "adopt and adapt technology that supports your people and processes." The starting point is this: where are real needs that technology can address to help achieve your goals? This is a question of pulling technology based on the opportunity, instead of pushing the technology because it is the latest fad.

This simple lesson grows more relevant every day. It seems clear to me that technology in the digital age can support lean thinking. The key issue is to avoid the temptation to buy and implement the latest gee-whiz digital tools, and instead to thoughtfully integrate technology with highly developed people and processes.

Toyota’s largest supplier, Denso, has made remarkable progress in adapting real-time data collection, the Internet of Things (IOT), and data analytics to support lean systems and amplify kaizen. At the center of Denso’s approach are people and their ability to sense reality and think creatively. Denso demonstrates that technology has the greatest potential when there is a culture of continuous improvement and the people are highly developed.

Raja Shembekar, vice president of Denso’s North American Production Innovation Center, is the chief architect of their use of the Internet of Things. As a starting point, Raja benchmarked other companies thought to be leaders in the technology. He found a lot of what he came to call “IOT wallpaper” with little real application: cool-looking displays, but no real problem solving. He concluded that Denso needed to take charge and lead its own effort, starting with treating the Battle Creek, Michigan plant as a pilot site. He built a small team, about half IOT experts and half shop-floor people like quality managers who were good at software. Together, they started to work on real problems identified at the gemba.

As one example, Denso has huge brazing ovens needed to make aluminum heat exchangers. It is critical to keep the temperature constant throughout the oven, and they do this with twelve expensive fans, each the size of a table. If a single fan stops, it takes 12 hours to cool the oven down from 700 degrees Celsius, 12 hours to replace the fan, and 12 hours to bring it back up. With tiny sensors on each fan and an IOT system they can monitor the condition and alert maintenance when there is any degradation, long before a shutdown. In one case, Denso data scientists reported to maintenance that a fan was going to fail in 58 hours and they should replace it. Raja explained: “Maintenance did not believe it. But we asked them to change it anyway. They took the fan out. Half the blades on the fan had disintegrated. They were totally shocked that they had no idea this was happening and we could provide that prediction.”
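The article does not say how that 58-hour prediction was made. Purely as an illustration of the general pattern (stream a condition signal from each fan, fit a degradation trend, and project when it crosses an alarm level), here is a minimal Python sketch; the threshold, the readings, and the linear model are all assumptions, not Denso's implementation.

```python
# Minimal sketch (not Denso's system): estimate the remaining life of an oven fan
# by fitting a linear trend to hourly vibration readings and projecting when that
# trend crosses an alarm threshold. All numbers are invented for illustration.
from collections import deque

import numpy as np

ALARM_THRESHOLD = 4.0  # hypothetical vibration level (mm/s RMS) signalling imminent failure
WINDOW_HOURS = 72      # hours of history used for the trend fit


def hours_to_failure(readings):
    """Project hours until the fitted trend crosses ALARM_THRESHOLD.

    `readings` holds (hour, vibration) pairs, one per hour.
    Returns None if the trend is flat or improving.
    """
    hours, vibration = np.array(list(readings)).T
    slope, intercept = np.polyfit(hours, vibration, deg=1)  # simple linear degradation model
    if slope <= 0:
        return None
    crossing_hour = (ALARM_THRESHOLD - intercept) / slope
    return max(crossing_hour - hours[-1], 0.0)


# Example: made-up readings from one slowly degrading fan.
history = deque(maxlen=WINDOW_HOURS)
for hour in range(WINDOW_HOURS):
    history.append((hour, 2.0 + 0.02 * hour))

eta = hours_to_failure(history)
if eta is not None and eta < 72:
    print(f"Alert maintenance: projected threshold crossing in ~{eta:.0f} hours")
```

A production system would of course use richer signals and models (vibration spectra, multiple sensors per fan, learned failure signatures), but the alerting logic follows the same shape: watch the trend, not just the current value.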

In another example, they focused on direct aid to the operator of an automated assembly process with robots. We are used to the idea of operators filling out a control chart with upper and lower control limits and taking action before the process is out of control. This system builds the control chart continuously, in real time. While we were observing, we could see a few minutes in advance that the process was heading out of control, and the operator made an adjustment, quickly fixing the problem.
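For readers who have not seen one, a control chart is easy to sketch. The snippet below is not the system observed at Denso; it is a toy Python version with invented limits and readings, showing the kind of early-warning rule the operator acted on: 3-sigma control limits around a baseline, plus a warning when consecutive points drift toward a limit.

```python
# Toy real-time control chart. The baseline statistics and readings are invented.
center, sigma = 10.0, 0.1                          # assumed mean and std dev from a stable baseline run
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # upper/lower control limits
warn_hi, warn_lo = center + 2 * sigma, center - 2 * sigma


def check_point(history, value):
    """Classify the newest measurement against the control limits."""
    history.append(value)
    if value > ucl or value < lcl:
        return "out of control: stop and investigate"
    # Western Electric-style early warning: 2 of the last 3 points beyond 2 sigma on one side
    recent = history[-3:]
    if sum(v > warn_hi for v in recent) >= 2 or sum(v < warn_lo for v in recent) >= 2:
        return "trending out of control: adjust now"
    return "in control"


readings = []
for v in [10.00, 10.05, 10.22, 10.24]:
    print(f"{v:.2f} -> {check_point(readings, v)}")
```

The last reading triggers the warning before any point actually crosses a control limit, which is exactly the window in which the operator can still adjust the process.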

A more ambitious project that cuts to the heart of TPS is automated standardized work support. Raja found a Stanford professor working on “motion technology”: a camera records a person at work, and AI analyzes the video in real time, for example chunking actions into work steps. Raja saw the potential for revolutionizing standardized work and collaborated with the professor, who now has a thriving company. The output will be familiar to those experienced with standardized work—steps and times versus takt, operator balance charts, and in effect real-time auditing that identifies deviations from the standardized work. The analyst does not have to spend time creating all these sheets and can call up all cases where there was a line stop, or all cases where there was a certain type of deviation from standard, and go back and watch the video. What was the operator actually doing at that point? This technology is getting broad interest from around the world—including from Toyota.
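The professor's software is not described in detail here, but the kind of output the paragraph describes can be illustrated with a toy sketch: given work steps detected from video with timestamps, compare observed step times against standardized-work times and flag deviations for later review. The step names, standard times, and tolerance below are assumptions for illustration.

```python
# Toy sketch: audit one work cycle against standardized-work times.
# Step names, standard times, and the tolerance are assumed, not from the real product.
STANDARD_SECONDS = {"pick part": 6.0, "load fixture": 9.0, "start cycle": 3.0}
TOLERANCE = 0.25  # flag any step more than 25% over its standard time


def audit_cycle(detected_steps):
    """`detected_steps` is a list of (step_name, start_s, end_s) tuples from the video model."""
    deviations = []
    for name, start, end in detected_steps:
        observed = end - start
        standard = STANDARD_SECONDS.get(name)
        if standard is None:
            deviations.append((name, observed, "step not in standardized work"))
        elif observed > standard * (1 + TOLERANCE):
            deviations.append((name, observed, f"{observed - standard:.1f}s over standard"))
    return deviations


one_cycle = [("pick part", 0.0, 6.5), ("load fixture", 6.5, 18.0), ("start cycle", 18.0, 21.0)]
for name, observed, note in audit_cycle(one_cycle):
    print(f"{name}: {observed:.1f}s ({note})")
```

In the real system the interesting part is upstream, in the AI that turns raw video into those timestamped steps; once the steps exist, building balance charts and deviation logs is bookkeeping of this sort.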

Does the Technology Deskill, Replace, or Enhance?

A key question going back to when I first started studying the impact of computer-integrated technology in the 1980s is: does the technology deskill, replace, or enhance? And the answer then and now is that it depends on management philosophy. Consider two different approaches: mechanistic and organic. From a mechanistic perspective, the value of technology is clear—replace people, monitor those remaining, and control them with clear instructions on what to do. Implement the technology quickly and broadly to remove the unpredictable human element.

From an organic-systems perspective, the value of the technology is very different. When combined with highly developed people motivated toward goals of serving customer and helping the company, it can multiply kaizen—faster and better.

Raja made it clear what side of the fence he was on. Denso’s focus was not on using the technology to eliminate people, though he had no doubt that over time there would be a need for fewer people in the factory. While there would be cases where a closed-loop technical system diagnosed and automatically corrected problems, there would be plenty of issues that required human ingenuity and intervention. In fact, Raja is convinced that the skill requirements of the people need to grow.

“We will always need people, but their skill level needs to be completely shifted over time. The technology provides data that allows the associate and the team leaders at the gemba to provide a far higher level of decision making. In the past they would just fill out the paperwork, but by the time they did all that they had either no time or energy to really comprehend the data. If they want to see trends from say five days ago or across people, that just wasn't there. What this has provided is what we now call fast PDCA. We can’t afford to have PDCA that takes 3 weeks anymore. We want a PDCA done before the end of that shift.”

At Denso in Japan, they operate on the belief that IOT does not cut people out of the loop, but rather provides superior information to people about the process. The power of big data and artificial intelligence is to give the operator information just-in-time that they previously could only guess at. But Denso expects the operator to use that information creatively to find the root cause and solve the problem through kaizen. Denso calls this “collaborative creation and growth of human, things, and equipment.” One irony might come out of this. Historically, a major role of industrial engineers was to reduce the number of workers needed. Now, the technology might enable the workers to the point they can eliminate the industrial engineers.

Balancing Adoption of the Latest Technology with Effectiveness

Toyota is a technologically advanced company and has been for decades—shut down its computer systems and you shut down the company. But Toyota is not interested in being trendy and making adoption of new technology an end in itself. Just as Toyota refuses to schedule parts made in one department to be pushed onto another, Toyota refuses to allow an information technology or advanced manufacturing technology department to push technology onto departments that do the value-added work of designing and building cars. Any information technology must meet the acid test of supporting people and processes and prove it adds value before it is implemented broadly. And then the ownership for introducing the new technology falls on existing management. They will run it, they will be responsible for meeting the targets, so they should lead the introduction.

The problem as I see it is that people living in the computer software world seem to believe if they can do a demonstration based on a simulated example, it should translate seamlessly into solving real problems in the outside world. That is the thinking that got companies in trouble back in the 1980s. And it was the situation that Raja of Denso encountered in the 21st century when he was exploring Industry 4.0 software. I was skeptical before talking to Raja about the bold concept of a fully-automated factory with everything run by internet connections, big data, and AI--and Raja confirmed my suspicions that it could be a lot of smoke and mirrors. On the other hand, I also was awakened to the strength of the technology. I am now convinced it is real and it includes the technologies missing from early failed attempts to computerize the factory in the 1980s. It seems they were not completely wrong about the potential, but just early.

It also became clear, in seeing what Raja has been doing at Denso’s plant in Battle Creek, that Industry 4.0 is not a disruptive force that makes TPS irrelevant, but rather can be an enabler that builds on TPS culture and thinking. After all, the Internet of Things necessarily includes things. And if the things are poorly designed, poorly laid out, and poorly maintained, software will not solve the problem.

The difference between Denso and companies that are creating electronic wallpaper seems to be a matter of mindset. Denso starts with the problem and then builds the social and technical systems to help address it. It builds on its existing culture of disciplined execution and problem solving. Without this, companies are left throwing technology at the wall and hoping it sticks. The principles of TPS will not disappear from a company like Denso, but the way the factory operates under TPS + IOT will be very different.

I was fascinated by the IOT technologies I saw at Denso, but in the back of my mind I could not help wondering what they mean for the Toyota Production System. TPS is about forcing people to think deeply to solve problems. Will computer systems make us lazy thinkers? How can we marry the powerful information coming out of the computers with the creativity of people in developing and testing ideas for improvement?

I was encouraged by Akio Toyoda’s thoughts. It is clear he sees the possibility of combining the best of the new technology with the creativity of thinking people. In a recent speech he said:

“Two concepts -- automation with people and Just-in-Time -- are the pillars of the TPS. What both have in common is that people are at the center. I believe that the more automation advances, the more the ability of the people using it will be put to the test. Machines cannot improve unless people do, too. Developing people with skills that can equal machines and senses that surpass sensors is a fundamental part of Toyota's approach.”

This article is from the upcoming revised edition of The Toyota Way, Second Edition (due out October 2020), which includes Principle 8: Adopt and Adapt Technology that Supports your People and Processes. It originally appeared in the Lean Post, the blog of the Lean Enterprise Institute, and is used with permission.


Don't Let Complexity Disrupt Your Product Delivery Process

New survey shows growing importance of digital transformation as manufacturers pivot to meet needs within digital economy.

Peter Fretty
AUG 28, 2020


As the digital economy progresses, manufacturers are embracing the importance of leveraging digital, data-based technologies. One of the key aspects of succeeding within a digital economy is the ability to rapidly design, develop, produce and deliver the products an evolving marketplace demands. Of course, just because the digital economy is all about delivering an easy, seamless experience, that does not mean it is easy to deliver, even with today’s advanced technologies.

Gocious, a product decision analytics platform that harnesses data for manufacturers to get to production faster, recently conducted a survey spotlighting the growing urgency for digitally transformed product configuration. And the results paint a pretty clear picture. Roughly 60% of manufacturers surveyed report taking six months or longer to plan and develop one product, and 62% have initiatives in place to reduce the product launch cycle time. However, less than 10% of manufacturers are using product definition tools to help automate, visualize or analyze product configurations.

Unfortunately, complexity is a common concern. Specifically, half of the survey respondents say they have 10 or more products in their product line, with a growing number of product variations adding complexity to product release cycles. Adding to the complexity, on average 43 people are involved in the product definition process, with an average of 33 people needed for product approvals.

Gocious CTO Maziar Adl tells IndustryWeek that the company was surprised by the low number of organizations adopting new ways to reduce their time to market at a moment when the speed of change is rapidly increasing due to the Industry 4.0 evolution.

“It was also interesting to see just how many people are involved in approving a product release. Based on both of these instances, we can see more clearly how more collaboration is necessary even early-on in product development,” he says. “Additionally, we were surprised to see the number of people using tools that are more specifically tailored to delivering software solutions, compared to manufactured products. To me this indicates that people want better solutions than the traditional spreadsheet approach and are prepared to compromise on capability to get what they want.”

The key takeaways? Digital transformation is happening, and some manufacturers are ill prepared to face the challenges of being left behind, explains Adl. “Many are too comfortable and need to look more closely at the massive need for acceleration within their internal digital transformation journeys. Manufacturers need to find ways to become more agile in delivering product. Things are shifting much more quickly than before,” he says.

According to Adl, prior to releasing ideas for build and detail design, manufacturers should first look at early stages of product line definition and planning to reduce waste and complexity. “If a manufacturer is not in control of their product complexity, then any initiatives that may be considered to shorten delivery time cycles will not have as much of an impact as they could. A lack of complexity management can ultimately create waste,” he says.
THE ART OF CRAFTWORK

The Unnecessary Crisis in the American Workforce

Somehow the notion that a four-year college is for everyone has entered the national zeitgeist, but it’s just not true.


Ken Rusk
AUG 28, 2020

For several decades, the supply of skilled blue-collar workers has been shrinking, while demand rises. According to the U.S. Department of Labor, as of July 2017, a record 6.8 million jobs that require skilled laborers were left unfilled. Many of them are manufacturing jobs. The National Association of Manufacturers reports that a skills gap has caused about a half-million manufacturing jobs to remain open, and consulting company Deloitte predicts that by the end of this decade, as many as 2.4 million manufacturing jobs may go unfilled, putting $454 billion in production at risk.

While these numbers were reported before the coronavirus hit and unemployment has jumped to record levels in virtually all industries, scientists and economists agree that our economy will recover. When it does, for a successful reboot, we’re going to need skilled blue-collar and manufacturing workers more than ever.

Skilled workers learn and improve upon their abilities over time, and use their hands for something more than just fingers on a keyboard. The description applies to independent contractors – carpenters, plumbers and electricians – but also to those on the factory floor – welders, machinists, machine operators, CAD draftspeople, and mechanical engineers.

There are numerous explanations for the shortage of workers in this segment of the economy. It starts with social pressures that encourage younger generations to avoid skilled trades in the first place. It’s the perpetuation of the ludicrous idea that the only path toward financial success includes a four-year college degree. That thinking ignores the fact that there are millions of young people who don’t have the inclination to spend their working life at a desk looking at a computer screen, or to emerge from schooling with staggering debt.

Somehow the notion that a four-year college is for everyone has entered the national zeitgeist, but it’s just not true. Having never graduated college myself, having enjoyed the American dream through a blue-collar life, and knowing many people in the same boat (some in yachts, actually), I can tell you there is definitely another way – a way that doesn't include staggering debt. Even before the pandemic, $1.5 trillion was owed by more than 44 million borrowers, and nearly 40% thought it would take more than a decade to pay off their student debt. The pandemic has obviously exacerbated this situation further.


The momentum against working with one’s hands begins early in a child’s life. Just look around your neighborhood and you’ll see one explanation right before your eyes. Instead of children playing outside, they’re inside using only thumbs on a game console. Remember when every back yard had a treehouse, built by the neighborhood kids, or a fort with sticks and stones? They had to actually pick up a tool and learn to use it.

But we can’t blame the Nintendos and the Xboxes of the world for this crisis in the American workforce. A more immediate concern is that many skilled tradespeople are aging out of the labor force. Estimates indicate that for every skilled person entering the workforce, there are five who retire. Many skilled trade companies are family-owned, some passed on from an earlier generation. But most of today’s youth aren’t nearly as interested in a blue-collar life that would keep that company in business, or even lead them toward a skilled manufacturing job.

One way blue-collar industries are trying to mitigate shortages is by recruiting women, whether it’s reaching out to high schools, Sunday schools, or the Girl Scouts. More and more employers are offering flexible hours so moms can be with their children at school drop-off and pickup times, as well as twelve-hour Saturday and Sunday shifts that allow one parent to work while the other takes care of the children. Fortunately, technology has, in many cases, made brains more important than brawn. In Virginia, Barbara Gaskins is the lone woman in her 17-person rotation that operates a CRMG cantilever, a 70-foot-tall machine that reaches over four sets of railroad tracks to load and unload 500 boxes a day, some weighing 30 tons. Gaskins does it all in an office, controlling the action with two joysticks and 30 buttons.

The current pandemic will only increase the need for skilled workers. Ramping up production of personal protective equipment, as well as more sophisticated devices like ventilators and COVID-19 virus and antibody tests, is important. And once a vaccine is ready, production of the literally billions of doses that will be needed is going to necessitate the labor of skilled people who know what they’re doing.

The most promising solution is for local employers to target young people by sponsoring apprenticeships and encouraging high schools to teach practical skills. Remember when every high school offered shop classes where students learned how to hammer a nail, weld a pipe, and fix a car? For many students, this was their first exposure to the kind of hands-on experience that ignited a blue-collar or manufacturing career. A 2019 report by the research arm of the Commercial Real Estate Development Association acknowledges that a declining public-school focus on vocational education has exacerbated a shortage of entry-level workers, and suggests that businesses must do a better job of investing in the training and recruiting of high school students and recent graduates. And according to a comprehensive report published by consulting firm Bain & Company, “If we want everyone’s kid to succeed, we need to bring vocational education back to the core of high school learning.”

Apprenticeships, on-the-job training, and vocational programs at community colleges are all ways young people can be taught blue-collar and manufacturing skills. These are also paths that lead to a successful life. According to the U.S. Census Bureau, only a third of adults 25 and older have a bachelor's degree or higher, and you can’t tell me that the other two-thirds are living in poverty. In fact, I can promise you they are not.

Savvy businesses are demonstrating to prospective employees that skilled entry level jobs in manufacturing, for example, can be the start of a long-term, lucrative career. These efforts can reinvigorate the American dream for a new generation of young people, while helping businesses access the talent and capabilities they need.

At a community or technical college, young people who have already found their passion, whether it’s fixing or building things or soldering two pieces of metal together, can refine their skills. That allows them to do what they love, rather than take courses that don’t interest them and will be forgotten minutes after the final exam. Amazon recently announced it will be spending $700 million to retrain about a third of its workforce in an effort to improve the technical expertise of its entry level coders and data technicians. PayPal founder Peter Thiel has established a fellowship which offers $100,000 apiece to 20 young people each year to skip college and pursue a business idea while being mentored by the Foundation’s network of founders, investors, and scientists. And Apple CEO Tim Cook recently spoke of a “mismatch” between the skills people are acquiring in college and the ones demanded by modern businesses. He noted that about half of Apple’s new hires in 2018 did not have a four-year degree. The current economic turmoil brought on by the pandemic will end, but the growing need for skilled workers will not.

Fortunately, this crisis in the American workforce is also an opportunity for millions of Americans. Does a young person really need a bachelor’s degree to become a web designer, carpenter, welder, or any of a dozen other lucrative professions? The internet makes researching alternatives easier than ever. You want to become a CNC (Computer Numerically Controlled) operator? Many CNC operators are trained on the job or as apprentices, but certificate programs are also offered through vocational schools, community colleges, and commercial trade schools.

There’s a general sentiment that blue-collar workers or manufacturing workers struggle from paycheck to paycheck and are unable to reach financial security for themselves and their families. I can tell you firsthand, this is absolutely untrue. A blue-collar or skilled manufacturing career provides the opportunity to work hard, make a good living, and if you want, maybe even the opportunity to be your own boss. Any blue-collar passion can turn into a business, and in today’s world, the costs of opening your own have never been lower. It’s going to take a long time (if ever) before the supply of carpenters or plumbers or stonemasons is greater than the demand. Law degrees and business majors may be a dime a dozen, but those who know how to operate a saw or a jackhammer, use an excavator, or repair a liquid filling machine are not. According to last year’s Harris poll of blue-collar workers, the vast majority—86%—are happy with their jobs, and 85% believe their lives are headed in the right direction.

For centuries, great men and women have built America from the ground up, mostly with their bare hands. From those early settlers to today’s modern worker, one thing remains consistent -- the ability to put down any tool we used, wipe the sweat from our brow, and revel in what we were called on to create. The demand for skilled laborers is not a passing trend. For every toddler with a smartphone, there is going to be a need for people who know how to do things, build things, and design the most efficient procedures. Smartphones can do a lot, but they can’t oversee manufacturing, motivate a crew, or perform assembly or critical repairs. It’s time to celebrate the blue-collar worker!

Ken Rusk is a blue-collar entrepreneur who runs Rusk Industries in Northwest Ohio. He is the author of the book Blue Collar Cash: Love Your Work, Secure Your Future, and Find Happiness for Life (Dey Street Books, July 28, 2020). See him at KenRusk.com and as KenRuskofficial on Facebook and Instagram.
Explainer: The Why and How of Disposing Electronic Waste

29/08/2020
Featured image: A scrap dealer piles up discarded TV sets before dismantling them at a scrap yard in Ahmedabad, India, July 2, 2020. Photo: Reuters/Amit Dave
SEJAL MEHTA

What do you do with your e-waste? The answers would possibly range across a wide spectrum – from ‘what is e-waste’, ‘office IT vendor’ and ‘collection boxes’ to ‘we just dump it in the dustbin’ or ‘hoard it in a cupboard.’ It would appear that disposing of e-waste effectively (or at all) is not a priority because, unlike our natural waste, it doesn’t really get in the way.

How much e-waste are we generating and why should we worry about it?

Simple answer: because we’re quickly filling up to the brim with it.

According to a 2019 United Nations report, titled ‘A New Circular Vision For Electronics, Time for a Global Reboot’, consumers discard 44 million tonnes worth of electronics each year; only 20% is recycled sustainably.

The Global E-Waste Monitor 2020 shows that consumers discarded 53.6 million tonnes worth of electronics in 2019 globally, up 20% in 5 years.

India generated 3.2 million tonnes of e-waste last year, ranking third after China (10.1 million tonnes) and the US (6.9 million tonnes). Following the current growth rate of e-waste, an ASSOCHAM-EY joint report, titled ‘Electronic Waste Management in India’ estimated India to generate 5 million tonnes by 2021. The study also identified computer equipment and mobile phones as the principal waste generators in India.

With COVID-19 keeping people indoors, the usage is only getting higher; and without proper intervention, it is likely to be over 100 million tonnes by 2050.

Also read: Photo Story: How E-Waste Workers in Delhi Jeopardise Their Health to Earn a Living

What happens if we don’t recycle?

Two things – from dumpsters, it either goes to landfills or travels into unregulated markets.

Ashley Delaney is Founder at Group TenPlus, a Goa company that manages the collection of electronic waste. “An ordinary circuit board from a mobile or laptop contains roughly 16 different metals,” says Delaney. “Most informal sectors will probably be able to retrieve a couple of metals and landfill the rest. Hazardous chemicals like mercury, which are used to extract these metals, leach into the soil, which will be damaged forever. If you find discarded batteries, tube lights, CFL bulbs, chances are the soil around them will be barren. Simply put – composting sites have fungus growing around it, despite being a ‘waste space’. But look around a dumpster, e-waste will ensure that nothing natural will grow around it, not even grass.”

Once the quantities increase, the leaching of metal finds its way to everything around that space, even food. When e-waste travels to our oceans in large quantities, it contaminates water with gaseous or liquid toxins, which we can’t even see. A study led by SRM University, Tamil Nadu, found that soil from informal electronic recycling sites that recover metals showed high levels of contamination across Mumbai, Delhi, Kolkata and Chennai.

Why should we recycle e-waste?

The point of extracting metals and plastic from e-waste is to use them towards making more electronics. This is not as easy as it seems. These metals are difficult to extract – the UN report puts the total recovery rates for cobalt at 30% (despite technology existing that could recycle 95%). It’s used for laptops, smartphones, and electric car batteries, and recycled metals are two to 10 times more energy-efficient than metals smelted from virgin ore. The way forward to ensuring a sustainable chain in manufacturing and recycling is to build effective reuse methods.

This is also vital because the key elements in most electronics – rare earth metals – aren’t exactly rare, despite their name, but are definitely hard to obtain, at least locally. The latest forecasts show that e-waste’s global worth is around $62.5 billion annually, which is more than the GDP of most countries. It’s also worth three times the output of all the world’s silver mines.

Also read: ‘Unsustainable’: Global E-Waste Monitor Report Cites India’s Problem Among Others

Is my local kabadiwala (scrap dealer) a good option?

Short answer: no.

When you give your e-waste to an unauthorised waste-collector, you’re contributing to the chain of unregulated markets, which handles over 95% of the e-waste generated in India. These markets attempt to extract metals from devices to sell them onward, but often without the skills needed for each metal or the necessary safety standards.

“There are thousands of informal dismantling and recycling units – Dharavi in Mumbai, Meerut, Moradabad, Seelampur in Delhi, and many more,” says Pranshu Singhal, Founder, Karo Sambhav. “These spaces engage in open-air burning of wires to extract copper, use cyanide-based acid to extract metals – at great harm to themselves and the environment around them.”

Once they extract copper from a product, it finds its way back into the secondary market, in whatever part of the world it might end up. The challenge, primarily, is the practices that are deployed.

A 2018 documentary Welcome to Sodom explores the almost dystopian, shocking world of the Agbogbloshie dump in Ghana, where life revolves around toxic waste, versus a hope of a healthier life. The site says, “Every year about 2,50,000 tons of sorted out computers, smartphones, air conditions tanks and other devices from a far away electrified and digitalised world end up here, shipped to Ghana illegally.”

Reports show that e-waste workers suffer from stress, headaches, shortness of breath, chest pain, weakness, and dizziness and even DNA damage. There is a body of research, the report cites, that shows “a significant risk of harm, especially to children who are still growing and developing. Individual chemicals in e-waste such as lead, mercury, cadmium, chromium, PCBs, PBDEs, and PAHs are known to have serious impacts on nearly every organ system.”

Dharavi is one of the top hubs in India for the informal recycling of e-waste. Studies have shown that even the water there is acidic and the fumes are causing health problems. As Delaney says, “Don’t go to a kabadiwala – you’re handing him a knife to either kill himself or someone else with it.”  
India generates about 3 million tonnes (MT) of e-waste annually and ranks third among e-waste producing countries, after China and the US. 
Photo: ITU/R.Zaveri/Flickr

What are India’s laws to manage e-waste?

India is the only country in Southern Asia with e-waste legislation, with laws to manage e-waste in place since 2011, mandating that only authorised dismantlers and recyclers collect e-waste. There are now 312 authorised recyclers in the country.

The E-waste (Management) Rules, 2016 (effective from October 2016) mandated collection targets and transferred responsibilities to the producers – Extended Producer Responsibility (EPR). This put the onus on the brands to ensure that waste was brought back in. These targets were relaxed in 2018.

Also read: Basel Convention’s Plastic Ban Amendment a New Step Against Waste Colonialism

Karo Sambhav’s Singhal understood the importance of early-stage success after the regulations were passed. The e-waste movement had finally begun in India, and without quick results, it would lose momentum. Having worked in the sustainability space before – at Nokia and with Thomas Lindquist (who coined the EPR concept) – Singhal launched Karo Sambhav.

“We work with waste collectors and aggregators and help them get formalised – ensure everyone has PAN cards, bank accounts and give invoices, and ensure that waste is traceable,” he said. This was also the time of demonetisation and GST – policies that pushed unregulated extractors to align themselves with a collection centre. As far as businesses were concerned, data sets and transaction records allowed transparency and a trail for the trajectory of e-waste.

Why don’t we see more outreach about recycling our waste?

While our conversations around sewage and garbage segregation are targeted and goal-oriented, the quiet crisis literally taking up 70% of our landfills gets very little talk time, especially from the brands themselves.

“The idea behind the waste management rules was not just to ensure waste is collected and recycled responsibly but also that manufacturers start to include sustainable methods,” says Priti Mahesh, Chief Programme Officer at Toxics Link.

“Right now, the manufacturing chain is scattered – parts for an item come from one country, the battery from another, the assembly happens in a third. So even collection and extraction are a bit ambiguous and so is the financial cost. This system is flawed. The deposit refund scheme (where there is some refund on the return of a product) is available, but not mandated. In a price-sensitive market, with no penalties attached, a brand is unlikely to make a product more expensive to factor this cost in when a competitor won’t,” she added.

In the current setting, she says, it requires extra effort, costs and infrastructure for not much in return, so most brands are not ready to take financial responsibility. That leads to a dangerous loop. Consumers are met with enough advertising the world over that urges them to buy more, but how many brands are using ad space to remind you to be mindful of this global crisis?

According to Delaney, some brands avail of the deposit refund scheme, but don’t advertise it. “Say you return your car battery to the company and are delighted when they offer you Rs. 400 discount in exchange for a new one. What they’re not telling you is they’re refunding what is due to you; it is not a discount – but Indian jugaad.”

The solution lies in creating a circular economy of electronics, says a report from the World Economic Forum. The products need to be designed so that they are reusable, durable, and safe to recycle. The producers should also have buy-back or return offers for old equipment and plans to incentivise the consumer financially. The report also advocates a system of ‘urban mining’ by strengthening the extended producer responsibility provision.

What can a consumer do? The 4 R’s

Reuse: Use your gadgets for longer. The upgrade to a new electronic item should ideally happen for necessity, not style. If you’re okay to use second-hand electronics, do so.

Repair: Ensure repair policies exist. Ask for them.

Recycle:

Talk to the brand: The best and most effective longer-term solution, which might require some persistence on your part, is talking to the brand. The requests to some established brands for comments on this story were met with either silence or a refusal to comment. But if enough consumers ask what practices are in place, it will become integrated into the way a brand communicates with us – through retail and advertising. Even if you buy at a mall, a chain, or a small retail store, ask what the return/recycle policy is. If you don’t understand the answers, call the brand.

Most brands have collection details on their websites. Use them. Think beyond phones and laptops and be mindful of all electronics – batteries (both car and gadget), speakers, tubelights. It’s easy to throw these in the trash. Don’t.

Collection boxes: Even if you’re using collection boxes that brands have set up in the vicinity, call and ask what is happening to the waste in the bins.

Also read: How Are India’s Plastic Waste Imports Increasing?

Research: “I’d add one more R here – research,” says Suchismita Pai, Head of Outreach at Swach, Pune. “Almost everything is biodegradable in its own time. What you need to look for is something that is bio-compostable within a reasonable span of time. Check on the box of a new product for e-waste instructions. It’s always there, read it. Every manufacturer has a toll free number. Use it.”

Registered collection organisations: If you’re using registered organisations in your city, ask them about their methods, their recyclers, and where the waste is going. A simple Google search will yield results in your city; you can start with the ones in this story (Group TenPlus, Karo Sambhav, Swach, Toxics Link) and ask them for alternatives in your city. Not every ISI-marked product is authentic, but a good brand will always have transparency.

But ultimately, brands will have to accept the onus of supporting customers through this. “Organisations like ours do collection drives, outreach campaigns, but ultimately our reach is limited,” says Singhal. “India is a country of billions. The brands that sell you the product have the largest presence. E-waste will need to be integrated into how brands communicate constantly with us in the future. India generated 3 million tonnes already and it will only rise exponentially.”

Singhal accepts that while we have already come a long way, these are early stages: India is far from establishing strong structures and mature processes, and the need for heavy investment in recycling infrastructure is paramount. In the meantime, consumers can give e-waste as much attention as they would their daily garbage. A mindset shift might be the start of a circular vision.

This article was originally published on Mongabay.


How Mangroves on Car Nicobar Fought Back Sea-Level Rise After the 2004 Tsunami

29/08/2020

Featured image: Rhizophora mangroves. Photo: J. B. Friday/Flickr, CC BY 2.0

In the 2004 Sumatra-Andaman earthquake-tsunami, as land sank and the sea suddenly rose at Car Nicobar Island, mangroves facing the land were unable to survive. But the abrupt disturbance did not affect the sea-facing mangroves dominated by the Rhizophora spp., a study has said.

Seaward mangroves dominated by Rhizophora spp. could fight back the prolonged flooding and the pounding of the waves. But the landward mangroves comprising the Bruguiera spp., Lumnitzera spp., Sonneratia spp. could only take so much – they were unable to survive the sudden one-meter land subsidence at Kimios Bay in Car Nicobar Island, a part of the Nicobar Islands, during the major seismic event.

“The abrupt sea-level rise (SLR) in the Andaman and Nicobar Islands due to the sinking of the earth’s crust by 1.1 metre provided insights on species-level responses of mangroves to SLR,” said study author Nehru Prabakaran of Wildlife Institute of India. He suspects the resilience of Rhizophora spp. is probably due to the frequent geologic events in the Nicobar Islands and their adaptability to thrive in habitats that experience a long duration of flooding by seawater.

“There is a varying degree of resilience among species. For example, each mangrove species have different levels of resilience (high or low) to sea-level rise, mainly due to their morphological (structural) adaptations to thrive in different depths of tidal flooding,” said Prabakaran, stressing that the inter-specific resilience among mangrove species to SLR is a key to design conservation strategies for this economically important ecosystem that is among the most vulnerable to SLR.

The 2004 Sumatra-Andaman earthquake followed by the catastrophic tsunami gobbled up landmass. It stripped the coast of trees in the Nicobar Islands in the Indian Ocean – the landmass closest to the earthquake epicentre. Mangroves, which flourish where land and water meet, bore the brunt of the natural disaster. According to a 2018 study by Prabakaran, the event destroyed as much as 97% of mangrove cover on the Nicobar Islands.

Also read: How Mangroves Protect People From Increasingly Powerful Storms

Among the sites with surviving mangroves, Kimios Bay in Car Nicobar Island was the only patch with more than 80 hectares of mangrove area that survived despite the 1.1 metre of land collapse. This 80-hectare patch is the unaffected Rhizophora vegetation. The affected spots dominated by Bruguiera gymnorhiza and Lumnitzera racemosa span roughly 100 ha.

Expanding on the ability of Rhizophora species to adjust to sea-level rise, Prabakaran said that, across the world, these plants predominantly grow on the seaward side, mainly due to their pneumatophores/stilt roots, which are specialised aerial breathing roots.

In Rhizophora spp., roots diverge from stems and branches and penetrate the soil some distance away from the main stem, as in banyan trees. “These roots (in Rhizophora spp.) strengthen the plants to withstand high wind speeds (frequent in the seaward zone) and longer hours of tidal water flooding. On the other hand, the Bruguiera spp. have knee roots, which means the roots go into the soil for a distance and then come out,” said Prabakaran.

Knee roots are horizontal roots growing just below the soil surface that periodically grow vertically upwards and then immediately loop downwards to resemble a bent knee. By repetition, a single horizontal root develops a series of knees at regular intervals. The aerial portions (knees) of these roots help aerate the whole root, which, because it spreads so widely, improves anchorage in the unstable mud, according to information about mangroves on a National University of Singapore webpage.

According to the India State of Forest Report (ISFR) 2019, about 40% of the world’s mangrove cover is found in southeast Asia and South Asia. India has about 3% of the total mangrove cover in south Asia. The current assessment shows that mangrove cover in the country is 4,975 sq km, which is 0.15% of the country’s total geographical area. Mangrove cover in the country has increased by 54 sq km (1.10%) compared to the previous assessment. West Bengal has 42.45% of India’s mangrove cover, followed by Gujarat with 23.66% and Andaman and Nicobar Islands with 12.39% cover.

Mangroves have a complex root system that efficiently dissipates sea-wave energy, protecting coastal areas from tsunamis, storm surges, and soil erosion. Their protective role has been widely recognised, especially after the 2004 tsunami.

Keeping pace with sea-level rise

Mangroves don’t go down without a fight. They can keep pace with sea-level rise and avoid flooding by trapping sediments vertically, which allows them to maintain soil levels suitable for plant growth. “The faster root growth of Rhizophora spp. also allows them to quickly trap sediment and build soil to match up with the global sea-level rise,” said Prabakaran.

Also read: Amphan in the Sundarbans: How Mangroves Protect the Coast From Tropical Storms

But a 2015 study on mangroves in the Indo-Pacific region suggests that mangrove forests at sites with low tidal range and low sediment supply could be submerged as early as 2070. In 69% of study sites, the rate at which the mangroves accrete sediment is not fast enough to match the current rate of sea-level rise, the paper said.

A recent Intergovernmental Panel on Climate Change special report cautioned that current ecosystem services from the ocean are expected to be reduced at 1.5 degrees Celsius of global warming, with losses being even greater at two degrees Celsius of global warming.

Agreeing with the findings at Kimios Bay, coastal systems research scientist R. Ramasubramanian at M. S. Swaminathan Research Foundation said a one-meter rise in sea level resulting in sudden submergence usually would not be a threat to Rhizophora plants. These plants grow 6-10 metres high in the Andaman and Nicobar Islands and have a large number of roots above the submergence level.

“The mangrove species such as Avicennia spp. and Bruguiera spp. can also withstand the normal sea-level rise of about a few millimeters a year (around 3 mm/year). This is because the root modifications (pneumatophores and knee roots) will also gradually grow along with the sea level and will be able to withstand the rise in sea level,” said Ramasubramanian. “But the 2004 event was a rare one where the SLR happened suddenly, and the submergence for a long period killed the mangroves that have shorter pneumatophores.”

The frontline plants that face the sea, such as Rhizophora spp., take the worst hit from tsunamis. “As the waves move towards the mangrove vegetation, their energy will be considerably reduced from sea to land. The seaward mangrove zone experiences the high wave energy, and the landward zone experiences the least. Therefore, the physical uprooting of trees is high in the frontline seaward mangrove zone. In contrast, the tree death due to subsidence/submergence related sea-level rise was high in the landward mangrove zone,” explained Prabakaran.

Rhizophora mangroves. Photo: J. B. Friday/Flickr, CC BY 2.0

Legacy effects may have also shaped the resilience of Rhizophora assemblage. In the last two centuries, the Andaman and Nicobar Islands have experienced four strong earthquakes in 1847, 1881, 1941, and 2004 due to tectonic movements. The mangroves in these islands have perhaps undergone numerous sudden sea-level changes, the paper states.

“Records of mangrove and coastal vegetation being drowned due to subsidence are available across the Andaman and Nicobar Islands dating back to 1868 (Kurz 1868; Oldam 1884; Tipper 1911). Particularly noteworthy was the 1881 earthquake, which caused uplift and subsidence in Car Nicobar (Ortiz and Bilham 2003),” the study said.

The survival threshold of Rhizophora spp. appears to be between 1.1 m (as recorded in Car Nicobar) and 1.35 m of abrupt subsidence. However, further studies focusing on microcosm experiments to understand Rhizophora spp. resilience to rapid SLR at the study site are required to strengthen these observations, Prabakaran said.

“The Bruguiera spp. is coming back slowly. It is coming up in areas where earlier there were coconut plantations. These areas have become intertidal where the tidal water comes and goes,” said Prabakaran.

Also read: Bullet Train Project to Cost Maharashtra 54,000 Mangrove Trees

The United Nations Environmental Programme emphasises that mangrove forests are among the most powerful nature-based solutions to climate change. But with 67% of mangroves lost or degraded to date, and an additional 1.0% being lost each year, they are at risk of being eradicated. Without mangroves, 39% more people would be flooded annually and flood damage would increase by more than 16% and USD 82 billion. They protect shorelines from eroding and shield communities from floods, hurricanes, and storms, a more important service than ever as sea levels continue to rise. The UNEP recently came out with ‘Guidelines on Mangrove Ecosystem Restoration for the Western Indian Ocean Region’ to analyse risks and challenges to restoration projects and point to potential solutions.

Prabakaran emphasised that the choice of mangrove species matters for mangrove restoration projects. “We cannot confirm that Rhizophora species across the globe would show similar adaptability to a sudden increase in sea level, like in Car Nicobar. But a number of other researchers have confirmed that Rhizophora species across the globe are certainly performing better against the global sea-level rise (which is a gradual process unlike the sudden increase in Car Nicobar) compared to many other mangrove species,” he said.

“If the objective of any mangrove conservation and restoration projects is focused on combating sea level rise, then Rhizophora species would be comparatively the better species of choice for the frontline seaward mangrove zones, while other mangrove species should also be planted based on their habitat preferences (eg. seaward zone or landward zone),” Prabakaran added.

Most mangrove ecosystems have the species diversity to tolerate a wide range of salinity and submergence, observed Ramasubramanian.

“Succession will take place naturally based on environmental conditions. Even in the Indian part of the Sunderbans, the freshwater-loving species Heritiera fomes are slowly disappearing, and other saltwater tolerant species are gradually occupying the area. The Sunderbans mangroves are resilient, and they will recover on their own. In Muthupet in Tamil Nadu, the recent cyclone Gaja had a huge impact. Large areas of mangroves were lost. Slowly it is recovering on its own,” added Ramasubramanian.

This article was originally published on Mongabay.

Remembering the Work of G.N. Ramachandran and Others in the Time of COVID-19

29/08/2020

G.N. Ramachandran. Photo: Current Science, vol. 80, no. 8, April 2001/fair use.

The whole world is reeling under the impact of the COVID-19 pandemic. The safest way out depends on manufacturing an effective vaccine. Many vaccine candidates are currently in different stages of clinical trials. A quick scan of the list suggests over half the candidates are attempting to use the novel coronavirus’s spike protein to generate immunity, so getting its structure right is critical.

G.N. Ramachandran’s work from the 1960s is notable in this regard. I was first introduced to his influence on structural biology as a graduate student and have always wished he were better known outside this niche field. His work has guided the design of protein-based vaccines and therapeutics touching everyday life. However, his isn’t a household name even in India.

Ramachandran began his academic career as a student of electrical engineering at the Indian Institute of Science, Bengaluru. He was soon pulled into the physics department by C.V. Raman, who said, “I am admitting Ramachandran into my department as he is a bit too bright to be in yours.”

Under Raman, Ramachandran began research into the field of optics and X-ray topography. After graduation, he earned a PhD at the University of Cambridge, in the laboratory of Lawrence Bragg. He focused on X-ray crystallography, a technique used to analyse the structures of proteins.

Around the same time, the American physicist Linus Pauling was conducting groundbreaking work on the nature and structure of proteins. Pauling’s teaching and writings had a great influence on Ramachandran. So when the latter became a professor of physics at the University of Madras in 1952, he set up an X-ray crystallography laboratory to study biological structures.

Proteins are made of long-chain polymers called polypeptides. There can be one or multiple polypeptide chains that come together to form a protein. The novel coronavirus’s spike protein comprises three identical peptide chains. Each chain consists of repeating units called amino acids. The chain folds into intricate shapes or motifs, such as helices and sheets. These shapes are significant; misfolded proteins often give rise to debilitating diseases.

Spike glycoprotein from SARS-CoV-2. Image: 5-HT2AR/Wikimedia Commons

The long orange region in the spike protein is an alpha helix and the pink region is a beta sheet. Mixing and matching of motifs forms the different protein structures required to ensure the protein performs its function. In this case, the spike protein’s overall structure helps it bind to a protein on the surface of cells in the human nose and mouth.

The order of amino acids within a peptide chain dictates how it will fold. Amino acids take their name from the nitrogen-containing amino group on one end and the carboxyl group (the acid) on the other, with a connecting carbon atom in the middle (Cα). The differences between amino acids come from a third part, called the side chain (R), which attaches to the central carbon atom.

The structure of an un-ionised amino acid. Image: Techguy78/Wikimedia Commons, CC BY-SA 4.0

In the mid-20th century, the idea of firing beams of X-rays at molecules to elucidate their structure based on how they diffracted the beams was new. In 1951, Pauling, Robert Corey and Herman Branson published their descriptions of the alpha helix and beta sheet motifs. Ramachandran wanted to continue this work, and chose to study the structure of collagen first.

Collagen is a protein found abundantly in the human body: it makes up the bulk of our skin, cartilage and connective tissues. Together with his postdoctoral fellow, Gopinath Kartha, Ramachandran proposed a triple helix structure for collagen – also called the Madras helix. However, Alexander Rich and Francis Crick contested this idea because they thought the structure allowed less space between atoms than was compatible with the prevailing understanding of chemistry.

This set Ramachandran and his colleagues C. Ramakrishnan and V. Sasisekharan on the path to further understand and describe the structures of polypeptide chains. They conducted a survey of the crystal structures available to determine how close two atoms could approach, and thus deduce the permissible interatomic distances within a polypeptide chain.

They also found that in a chain, each amino acid can only rotate around two bonds. Based on this, they characterised two angles – phi (φ), the rotation around the bond between the nitrogen atom and the central carbon atom, and psi (ψ), the rotation around the bond between the central carbon and the carbonyl carbon atom. The figure below shows the two angles.

A protein backbone showing the phi and psi angles. Image: Dcrjsr/Wikimedia Commons, CC BY 3.0

Now, they could plot all possible combinations of these angles within a given polypeptide sequence against each other, and eliminate any combination that violated the interatomic contact limits. And voila! They managed to reduce a messy biological problem to simple considerations in elementary mathematics.

The resulting Ramachandran plot changed how biochemists studied molecules of interest and unraveled complicated biological processes. It’s a plot with phi on one axis and psi on the other. At a glance, the graph revealed islands of possible combinations separated by seas of impossible structures. If a molecule’s angles pulled it into the sea, biochemists would know it couldn’t exist in the body and follow the laws of physical chemistry at the same time.

Ramachandran plots for two amino acids, proline (left) and glycine (right). Images: Dcrjsr/Wikimedia Commons, CC BY 3.0

More broadly, biochemists today can quickly understand which structures are possible and which aren’t, and compare known and unknown structures in an intuitive manner. The Ramachandran plot has also become an elegant way to introduce students to protein structural biology and help them understand how structure and function are related to each other.
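For the curious, the computation underneath such checks can be sketched in a few lines of Python, assuming backbone atom coordinates are available: a standard dihedral-angle calculation for phi and psi, plus a deliberately crude rectangular stand-in for one allowed region. Real validation software uses empirically derived allowed regions, not a box, and the coordinates below are invented.

```python
# Sketch: compute backbone dihedral angles and test a (phi, psi) pair against a
# crude "allowed" box. Coordinates are invented; real tools use empirical regions.
import numpy as np


def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle in degrees for the chain of atoms p0-p1-p2-p3."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1  # component of p1->p0 perpendicular to the central bond
    w = b2 - np.dot(b2, b1) * b1  # component of p2->p3 perpendicular to the central bond
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))


def in_crude_alpha_box(phi, psi):
    """Toy check: a rectangle loosely around the right-handed alpha-helical region."""
    return -160 <= phi <= -20 and -90 <= psi <= 30


# Hypothetical backbone coordinates (angstroms): C of the previous residue, then N, CA, C,
# and N of the next residue. Phi uses C(-1)-N-CA-C; psi uses N-CA-C-N(+1).
c_prev = np.array([0.00, 0.00, 0.00])
n      = np.array([1.33, 0.00, 0.00])
ca     = np.array([2.00, 1.20, 0.30])
c      = np.array([3.50, 1.10, 0.40])
n_next = np.array([4.20, 2.20, 0.10])

phi = dihedral(c_prev, n, ca, c)
psi = dihedral(n, ca, c, n_next)
print(f"phi = {phi:.1f} deg, psi = {psi:.1f} deg, in crude alpha box: {in_crude_alpha_box(phi, psi)}")
```

Validation software does essentially this over every residue of a structure and reports how many land in favoured, allowed and disallowed regions.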

When Ramachandran died in 2001, at the age of 78, the tributes flooded in. One in particular, by Janet Thornton, published in an obituary by Easwara Subramanian, went thus:

“I have never met Professor Ramachandran, but his contribution … ranks with Pauling’s discovery of the α-helix. It never fails to excite me, when I see the Ramachandran plot and realise how much of the beauty and order of protein structures is encapsulated by this plot. I also think that this major discovery highlights the importance of clear thought and vision that do not always need expensive equipment and huge teams of people.”

The plot’s remarkable longevity is unsurprising given its enduring significance in protein structural biology. It is vital that researchers independently validate biological structures so that others who use those structures – of proteins, say – to develop drugs and vaccines can be confident that they know them well enough. A variety of algorithms exist to check a structure’s accuracy using updated Ramachandran plots.
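
(A toy illustration of what such a check involves – and only a toy: real validators, MolProbity-style, score residues against smooth probability contours derived from thousands of curated high-resolution structures, not the crude rectangular regions assumed below.)

# Toy Ramachandran check: flag residues whose (phi, psi) pair falls outside
# two deliberately crude "allowed" boxes. Illustrative assumption only.
ALLOWED_BOXES = [
    (-180, -45, 90, 180),   # (phi_min, phi_max, psi_min, psi_max): roughly the beta-sheet region
    (-160, -45, -70, -5),   # roughly the right-handed alpha-helix region
]

def is_allowed(phi, psi):
    return any(pmin <= phi <= pmax and smin <= psi <= smax
               for pmin, pmax, smin, smax in ALLOWED_BOXES)

# Made-up (phi, psi) pairs, in degrees, for three residues.
residues = [(-60.0, -45.0), (-120.0, 130.0), (65.0, 40.0)]
outliers = [i for i, (phi, psi) in enumerate(residues) if not is_allowed(phi, psi)]
print(f"{len(outliers)} of {len(residues)} residues fall outside the toy allowed regions")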

When they need to check the structure of an unknown protein, researchers compare data obtained from crystallographic studies and from theoretical calculations. The more the plots match, the more confident scientists can be that their structure is the right one.

Ramachandran’s life is a testament to the power of curiosity, intellect and determination. Science education can be biased in favour of assigning credit to individuals, often white men of the West, over groups. In fact, science is an inherently collaborative endeavour, with each generation building on the work of previous ones. So just as we celebrate Ramachandran’s life and work, let’s also celebrate the lives and work of his students and collaborators – who together helped advance the science of protein structural biology such that it has risen to the occasion in our present crisis as well.

Deepika Calidas is a biochemist. Her last position was as a postdoctoral fellow at the Johns Hopkins School of Medicine.
A Black Hole Paradox Where Relativity and Quantum Physics Meet
29/08/2020

A simulated view of a black hole from the 2014 film ‘Interstellar’. Image: YouTube.

Of all the unsolved problems in physics, the black hole information paradox feels most like a whodunit. It’s a classic locked room mystery, and the victim in this case is quantum information.

Imagine a man goes inside a room and gets locked in. The police arrive at the scene and go in to find no trace of the man. Instead there are some bricks, a coffee machine and a dictionary (or any other set of random items) whose combined weight equals that of the man. Mass has been conserved – but without any trace of the man himself, no one can figure out his identity or how he vanished. This happens every time someone goes into one of these rooms: they disappear leaving behind a random set of items. The identity of the person is simply lost.

The black hole information loss paradox is a mystery along similarly bizarre lines.

A very fundamental law of physics says that quantum information can never disappear. But this is precisely what seems to happen in black holes. An inevitable consequence of several fundamental laws of physics is that heat must be released into space from near the black hole. This in turn should cause the black hole to shrink. As more heat is released, slowly, over time, the black hole will eventually disappear – leaving nothing behind but heat.

Whatever the black hole had swallowed would have carried its quantum information with it. However, there is no quantum information present in the heat.

Did quantum information really disappear? A fundamental law says it can’t. So could it have come out of the black hole? Again, several fundamental laws say that’s not possible either. So what is going on? Which of these laws is lying?

The case remains unsolved. We’ve learnt a lot from interrogating the laws but we haven’t ruled any of them out as suspects.

§
Illustration: JohnsonMartin/pixabay

In 1915, Albert Einstein completed his theory of general relativity by writing down the equation that encapsulated the laws of gravitation – which we now call Einstein’s equation. Within a few months, the German physicist Karl Schwarzschild found the first exact solution to Einstein’s equation, although it took physicists a couple of decades to fully understand it.

Schwarzschild had found that Einstein’s theory appeared to allow for certain special regions in spacetime where something that entered the region once would never be able to escape back out – not even light. John Wheeler later dubbed these regions ‘black holes’.

A second curious feature of Schwarzschild’s solution was that, if you went closer and closer to the centre of a black hole, the black hole’s gravitational force would just keep getting stronger, and right at the centre become infinite. This point, called a singularity, is where the theories of relativity are expected to break down, and become utterly useless at predicting what happens there. Anything that falls inside a black hole is doomed to eventually reach the singularity.

The next big step in our understanding of black holes was taken in 1938, by a forgotten Indian physicist named B. Datt (and independently by J. Robert Oppenheimer and Hartland Snyder a year later). Datt (and Oppenheimer-Snyder) showed that black holes are not just theoretical possibilities – they could be quite real objects, as the products of naturally occurring processes, waiting to be discovered in outer space.

We know today that if we packed a certain amount of matter into a sufficiently small volume of space, it will inevitably lead to the formation of a black hole. This is just what happens when the core of a sufficiently massive star can no longer perform nuclear fusion, and implodes under its own weight to become a black hole. So the work of Datt (and Oppenheimer-Snyder) indicated that black holes must be out there, waiting to be found by our telescopes.
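
(A standard textbook aside, not something the article itself spells out: the critical size in question is the Schwarzschild radius,

\[ r_s = \frac{2GM}{c^2}, \]

so the Sun, with a mass of about 2 × 10^30 kg, would have to be squeezed into a sphere roughly 3 km in radius before it became a black hole. Only sufficiently massive stellar cores ever get compressed that far on their own.)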

As of 2020, we’ve spotted hundreds; astrophysicists have even estimated there could be 10 million to a billion in just the Milky Way galaxy.

The 1950s and 1960s were called the golden age of general relativity: physicists made rapid strides in the field during this period. For example, they found there are different types of black holes – ones that spin and ones that don’t, ones that carry electromagnetic charge and ones that don’t, with different combinations of these attributes.

It was around this time that physicists proved what came to be called the ‘no hair’ theorem. It states that if two black holes have the same mass, electromagnetic charge and angular momentum (the momentum of their spin), then their behaviour will be completely identical.

In other words, if you’re observing a black hole from the outside, and you’ve learnt these three pieces of information, there is no further detail you can hope to learn about a black hole. The theorem’s name alludes to bald pates: without hair, there isn’t much with which to tell them apart.

By extension, you can discern no information about anything that has fallen inside from the outside – except their contribution to the mass, charge and angular momentum of the black hole itself.

§
Image: Engin_Akyurt/pixabay

As our understanding of space, time and gravity grew in leaps and bounds in the first half of the 20th century, so did our understanding of matter. Broadly speaking, physicists learned quantum theory could describe the properties of all matter and their interactions – barring those mediated by gravity.

For our discussion, we need to bear two things about quantum theory in mind.

First, any physical object – an electron, a car, the moon, whatever – can be described by a mathematical state at any point of time. Quantum theory is a probabilistic theory – all its predictions are in terms of probabilities. It can’t tell us where exactly an electron will be found in an experiment but it can tell us what the probability of finding it in a given location will be.

The mathematical state contains the table of all such probabilities, and encodes all possible knowledge of the object. That is, the set of probabilities for all possible experiments that can be performed on the object at that time can be read from the state.

The knowledge of the state is called quantum information.

Second, if we know the state of an object at one point of time, we can predict what its state will be at a future point as well as figure out what its state was at any point in its history. This is a different way to say the knowledge of the state is never lost; it’s always encoded in the future state. Ergo, quantum information never disappears.
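
(A minimal numerical sketch of this reversibility, using a made-up two-level toy system rather than anything specific to black holes: evolving a state with a unitary matrix and then applying that matrix’s conjugate transpose recovers the original state exactly, so nothing about it is lost.)

import numpy as np

# A toy two-level quantum state (a qubit); the probabilities of the two outcomes
# are the squared magnitudes of the entries, here 0.36 and 0.64.
psi0 = np.array([0.6, 0.8j])

# In quantum theory, the evolution of a closed system is described by a unitary
# matrix U. This particular rotation-like U is just an arbitrary example.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

psi_later = U @ psi0                    # the state at a later time
psi_recovered = U.conj().T @ psi_later  # running the evolution backwards

print(np.allclose(psi_recovered, psi0))  # True: the earlier state is fully recoverable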

In the late 1960s and 1970s, physicists working on general relativity started incorporating quantum theory in their work. Of the many questions they were interested in, one was to ask what quantum theory predicts about black holes.

Stephen Hawking provided the first fully rigorous solution to this problem – but his findings surprised the community, including Hawking himself. Hawking had found (on paper) that radiation is released into space from the immediate vicinity of a black hole – not the black hole itself – in a process that causes the black hole to slowly lose energy and shrink. Eventually, the black hole disappears entirely, leaving behind just the radiation.
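
(For the curious, the temperature of this radiation is a standard result, quoted here as background:

\[ T_H = \frac{\hbar c^3}{8 \pi G k_B M}. \]

It is inversely proportional to the black hole’s mass M, so as the hole radiates and shrinks it grows hotter and evaporates ever faster – which is why the process ultimately runs away and the black hole disappears.)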

Also read: The Little Known Calcutta Scientist Whose Shoulders Hawking Stood On

This sounds innocuous – but the kicker is that the radiation is thermal, i.e. heat. By studying it and measuring it, physicists can only extract information about the black hole’s mass, electromagnetic charge and angular momentum at the time the heat was emitted – the three quantities the no-hair theorem says completely describe a black hole to anyone on the outside. This is to be expected: the radiation also comes from just outside of the black hole, so the most it can tell us will have to do with these three properties.

Thermal radiation is purely random, and can’t encode any information.

However, the laws of quantum theory should allow us to deduce an object’s state in the past based on its state in the present. But for black holes the final state appears to be just wisps of heat with information about nothing but mass, charge and angular momentum. Quantum information went in and all that came out was randomness.

This is the black hole information loss paradox.

Stephen Hawking taking a zero-gravity flight in 2007. Credit: Jim Campbell/Aero-News Network/Wikimedia Commons

Since Hawking first hit upon the solution in his calculations, physicists have derived and re-derived his result in different ways, always to the same conclusion. The paradox follows straightforwardly from the same fundamental principles that hold in all other known physical situations, without causing any other paradoxes.

Scientists have proposed different competing ways to resolve the paradox. Most of them pin the blame on one of the fundamental principles and ask physicists to sacrifice it. Some others hold that hitherto unknown laws of physics could come to the rescue and eliminate the paradox.

A proposal called the remnant scenario is of the latter type. When gravitational forces are very strong, for example close to the singularity at the centre of a black hole, the laws of general relativity start breaking down. In these situations, we expect a deeper theory to take over – a theory of quantum gravity. We still don’t know what the correct theory of quantum gravity is – string theory is a leading candidate – but we know it has to exist.

In the late stages of its evaporation, a black hole becomes so small that the rules of quantum gravity must take over. Physicists have hypothesised that the rules of quantum gravity could predict that the evaporation stops when the black hole is small, leaving behind a tiny black hole called the remnant. Then there is no paradox: the remnant will still contain the objects the erstwhile black hole swallowed, and their states will continue to encode information about their past and future.

Sure, we can’t access these objects or their information from the outside, but this limitation doesn’t contradict quantum theory.

Another version of the remnant scenario hypothesises that once the laws of quantum gravity take over, the Hawking radiation will no longer be thermal and that radiation encoding all the missing information will be released at once into spacetime.

One objection to the remnant scenario is that the remnant might be too small to contain so much information. Supporters of the scenario have in turn pointed out that there are some convoluted geometrical structures that look small from the outside but have enough volume to contain the information. But we don’t know if such solutions would be stable from quantum theory’s perspective, and of course we don’t know what the rules of quantum gravity will actually say.

Another idea is that the radiation coming from near black holes is only approximately thermal, that there are subtle deviations from randomness that encode the quantum information of all objects that fell in. In this scenario, the seemingly lost quantum information comes out with the radiation.

This idea differs from the remnant scenario in that the radiation is assumed to be non-thermal right from the start, as opposed to only in the very late stages of the black hole’s evaporation. But there’s a problem: the no-hair theorem restricts how much information about the black hole is available on the outside, so how can radiation coming from just outside the black hole know any more about it?

So it must be that the radiation somehow knows about the inside, even though not even light could have escaped from the inside. This is forbidden in physics by the principle of locality, which says information can’t be transmitted between two points unless light – or any other physical signal – can travel from one to the other. Which means this idea can pan out only if we discard the principle of locality.

Also read: A 200-Year-Old Experiment Has Helped Us See a Black Hole’s Shadow

A third idea holds that black holes are very different from what general relativity tells us. According to string theory, for example, as black holes shrink, they’ll be replaced by more complicated quantum objects called fuzzballs. And unlike black holes, fuzzballs have plenty of hair, so the radiation coming from their vicinity will no longer be short on information.

If you looked at a fuzzball from a distance, you wouldn’t be able to tell it apart from a conventional black hole. But if you got really, really close, they’d look very different. Near the surface of a big-enough black hole, gravity is really weak – and the fuzzball idea proposes that general relativity is modified even in this situation. A cherished belief among physicists – true in all known cases – is that unknown physics only comes into play when really high energies are involved, like in the remnant scenario. Modifying general relativity at low energies like the fuzzball idea requires us to ditch this principle.

There’s yet another possibility, but it’s not popular: that the law of conservation of quantum information is simply incorrect, that quantum information can disappear, and that’s that.

We have made great advances in physics in the last 120 years or so, propelled by the twin revolutions of general relativity and quantum theory. Between them, these two theories can explain everything from the behaviour of subatomic particles to collisions of black holes billions of years ago.

However, the interplay of these two theories gives us the black hole information loss paradox. Physicists have learned a lot about the universe and conceived many fascinating ideas through attempts to resolve it. But the paradox itself remains mystifying, out of reach of explanation. Then again, what’s a good whodunit if it doesn’t make you work hard to solve the mystery?

The reveal – whenever one comes – will not only teach us something deep about nature, it also promises to be great fun. 😈


Nirmalya Kajuri is a theoretical physicist. He is currently a postdoctoral fellow in the Chennai Mathematical Institute.