We’re pleased to announce that The Narwhal’s Wake – Feature Documentary Film is fiscally sponsored by The Redford Center, and that we have a live IndieGoGo campaign running to fund our Arctic expedition to film narwhals in Baffin Bay.

 


Narwhals Doing Their Thing.

This summer, with the green light from the Canadian government, oceanic oil and gas exploration is set to begin in Baffin Bay, Canada, using seismic cannon testing. The method the mapping agencies will use is extremely destructive to sea life, and especially to cetaceans – dolphins, whales, and narwhals. It just so happens that 90% of the earth’s narwhals call Baffin Bay home, so this testing could lead to the near extinction of an entire species. The Inuit Mayor of Clyde River, Jerry Natanine, is leading the crusade to protect narwhals against seismic blasting in Baffin Bay this summer, and I’ve assembled an all-star team to join me in a journey to Toronto to cover Natanine’s battle and the plight of the narwhals.

Will this be the last narwhal?

The method for mapping the seafloor uses seismic cannons, which are 100,000 times louder than a jet engine (250 dB), to penetrate the seafloor and map pockets of underground oil. The practice has been shown to deafen and kill whales off the California coast, and even the industry’s own environmental impact estimates include large numbers of dolphin and whale fatalities as part of the cost of searching for oil. These cannons send deafening sound waves through the ocean every 10 seconds, 24 hours a day, for months. The noise drowns out whalesong and leaves pods unable to communicate; disoriented, many end up dying as a result. The non-profit ocean conservation group Oceana has an informative video about the practice and successfully thwarted seismic mapping off the Atlantic coast of the U.S.
How Seismic Cannons Work.

Jerry Natanine, the Inuit Mayor of Clyde River, and his coalition have been granted a hearing in Toronto on April 15th to appeal the Canadian government’s decision to allow mapping this summer. A number of groups, including Greenpeace and Save Our Arctic, are coming together to support the cause and garner more widespread attention.
Cinematographer Peter Mychalcewycz at work in Java, Indonesia.

Our small documentary team includes Academy Award-nominated producer Vanessa Bergonzoli and cinematographer Peter Mychalcewycz, with the support of Raul Gasteazoro and Casey Unterman of Black Powder Works.
Mayor Jerry Natanine To Challenge Seismic Blasting In Court.

Our plan is to head to Toronto to cover Mayor Natanine’s appeal, interview the players on both sides, talk with a marine biologist who specializes in cetaceans and narwhals, and explore the science behind seismic mapping technologies and their effects on sea life. The second part will be to travel to the hamlet of Clyde River on Baffin Bay to record the Inuit’s relationship to narwhals over the centuries, the migrations of the narwhals themselves, and the waters in jeopardy of being seismically “mapped.”
Head on Over to the Keep Narwhals Real Project Page on Kickstarter to Lend Your Support!

Please take a few more minutes to check out the Kickstarter campaign here and give what you can (sooner rather than later: it runs for less than two weeks!).
Thanks for helping to Keep Narwhals Real!
#keepnarwhalsreal
Narwhal Fun Facts!

Check out the story of how Keep Narwhals Real! got started, what the situation is right now regarding seismic cannon mapping in Baffin Bay, and how the film, ‘The Narwhal’s Wake,’ is progressing.

Thanks to The Studios of Key West for inviting me to speak at Mark Hedden and Marky Pierson’s Slideshow series.

This is mind-blowingly beautiful and amazing, a kind of artificial-intelligence transcendence. These images are rendered via a feedback loop in a neural network at Google’s Research Labs, creating what the network ‘thinks’ it should see. Check them out, along with a bit of background, here and here.

Here’s what Google Labs has to say:

Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
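
(For readers who want to make that concrete, here is a minimal sketch of such a training loop in PyTorch. The tiny layer stack and the random stand-in data are my own illustration, not Google’s actual model.)

```python
import torch
import torch.nn as nn

# A small "stack of layers"; real image classifiers use 10-30 of these.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),  # the input layer "talks to" the next...
    nn.Linear(128, 64), nn.ReLU(),       # ...which talks to the next...
    nn.Linear(64, 10),                   # ...until the "output" layer answers.
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Gradually adjust the network parameters until it gives the
# classifications we want (random stand-in data keeps this runnable).
for step in range(1000):
    images = torch.randn(32, 1, 28, 28)   # a batch of "training examples"
    labels = torch.randint(0, 10, (32,))  # the classifications we want
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()   # gradients of the loss w.r.t. every parameter
    optimizer.step()  # nudge each parameter a little in the right direction
```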

One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer may look for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees.
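
(One concrete way to peek at those layers is to attach forward hooks to a pretrained network and inspect what each layer produces. The sketch below uses torchvision’s GoogLeNet as a stand-in; layer names like conv1 and inception5b are specific to that implementation, not to Google’s original setup.)

```python
import torch
from torchvision import models

net = models.googlenet(weights="DEFAULT").eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook an early layer (edges/corners) and a deep one (complex objects).
net.conv1.register_forward_hook(save_activation("conv1"))
net.inception5b.register_forward_hook(save_activation("inception5b"))

with torch.no_grad():
    net(torch.randn(1, 3, 224, 224))  # any image-shaped input will do

for name, act in activations.items():
    # Early layers: large spatial maps of simple features;
    # deep layers: small maps of many abstract features.
    print(name, tuple(act.shape))
```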

One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana. By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
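
(Here is a minimal sketch of that upside-down trick: freeze a pretrained classifier, start from noise, and gradient-ascend the image toward a class score. ImageNet index 954 is “banana”; the occasional blur is a crude stand-in for the correlated-neighbors prior Google describes.)

```python
import torch
import torch.nn.functional as F
from torchvision import models

net = models.googlenet(weights="DEFAULT").eval()
for p in net.parameters():
    p.requires_grad_(False)  # we optimize pixels, not weights

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # random noise start
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    score = net(img)[0, 954]  # how "banana" does the image look?
    optimizer.zero_grad()
    (-score).backward()       # ascend the class score
    optimizer.step()
    if step % 10 == 0:
        # Crude natural-image prior: keep neighboring pixels correlated
        # by lightly blurring the image every few steps.
        with torch.no_grad():
            img.copy_(F.avg_pool2d(img, 3, stride=1, padding=1))
```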

Why is this important? Well, we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn’t matter (a fork can be any shape, size, color or orientation). But how do you check that the network has correctly learned the right features? It can help to visualize the network’s representation of a fork.

Indeed, in some cases, this reveals that the neural net isn’t quite looking for the thing we thought it was. For example, here’s what one neural net we designed thought dumbbells looked like:

There are dumbbells in there alright, but it seems no picture of a dumbbell is complete without a muscular weightlifter there to lift them. In this case, the network failed to completely distill the essence of a dumbbell. Maybe it’s never been shown a dumbbell without an arm holding it. Visualization can help us correct these kinds of training mishaps.

Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.
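
(A sketch of that “enhance whatever you detected” step: capture a chosen layer with a forward hook, then gradient-ascend the image to maximize that layer’s activation norm, a common stand-in for the objective described above. inception4c is a mid-level layer in torchvision’s GoogLeNet; picking an earlier or later layer changes the abstraction level.)

```python
import torch
from torchvision import models

net = models.googlenet(weights="DEFAULT").eval()
for p in net.parameters():
    p.requires_grad_(False)

captured = {}
net.inception4c.register_forward_hook(
    lambda m, i, o: captured.update(act=o)  # grab the chosen layer's output
)

def dream_step(img, lr=0.02):
    img = img.detach().requires_grad_(True)
    net(img)
    loss = captured["act"].norm()  # "how strongly did this layer respond?"
    loss.backward()
    with torch.no_grad():
        # Normalized gradient ascent on the image itself.
        img = img + lr * img.grad / (img.grad.abs().mean() + 1e-8)
    return img

img = torch.randn(1, 3, 224, 224)  # or a real photo tensor
for _ in range(50):
    img = dream_step(img)  # lower layers -> strokes; higher layers -> objects
```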

We must go deeper: Iterations

If we apply the algorithm iteratively on its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about. We can even start this process from a random-noise image, so that the result becomes purely the result of the neural network, as seen in the following images:
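
(Continuing the previous sketch, the iterate-and-zoom loop is just: dream a few steps, scale up slightly, crop the center back to size, and repeat, so the network keeps elaborating its own output.)

```python
import torch
import torch.nn.functional as F

def zoom(img, scale=1.05):
    # Enlarge slightly, then crop the center back to the original size.
    _, _, h, w = img.shape
    big = F.interpolate(img, scale_factor=scale,
                        mode="bilinear", align_corners=False)
    top = (big.shape[2] - h) // 2
    left = (big.shape[3] - w) // 2
    return big[:, :, top:top + h, left:left + w]

img = torch.randn(1, 3, 224, 224)  # start from pure noise, so the result
frames = []                        # is entirely the network's own invention
for _ in range(100):
    for _ in range(10):
        img = dream_step(img)      # dream_step() from the previous sketch
    img = zoom(img)
    frames.append(img.detach())    # an endless stream of new impressions
```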

Neural net “dreams”— generated purely from random noise, using a network trained on places by MIT Computer Science and AI Laboratory. See our Inceptionism gallery for hi-res versions of the images above and more (Images marked “Places205-GoogLeNet” were made using this network).

The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training. It also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general.

Please help KEEP NARWHALS REAL! This summer, Baffin Bay, home to 90% of earth’s narwhals, faces seismic cannon mapping for oil and gas exploration, a practice that can deafen and kill cetaceans. Help raise awareness about the dangers narwhals face, document the Inuit’s fight against big oil, and #KeepNarwhalsReal!

Two Wander Lost

NarNar hanging with his favorite boys, Jesse Estrin, Pirouz Ganji, and Ian Rowan, in Tahoe. Here we were enjoying pizza and beer after the boys had a long day of snowboarding. NarNar said it was one of the best days of his life. Keep narwhals real! Learn more and contribute, and see more of NarNar the narwhal.
