Hack The Future


On Thursday, December 8th, Cronos organised its annual Hack The Future event. This experience-driven hackathon, held at the unique location of the Fort in Edegem, gives students a taste of the inspiring jobs that await them. Prizes include job offers, paid trips, and new equipment. Eight teams took on the Big Data challenge organised by Big Industries.

 


The scenario

Together with the first 196 human beings, you've landed on Mars. Conditions are bad. Oxygen and water are running low. A meteorite storm is coming. An unknown disease is spreading fast; they think it's nanobots. The AI indicates 12 hours left before the pressure tanks implode.

The challenge

Build a large-scale weather monitoring and alerting solution (using Amazon Web Services) to increase your chance of survival. The team that builds the best-functioning monitoring application will not only increase its chances of survival, but will also go home with a nice prize on Earth!

Application flow

The goal is to build a monitoring application on top of a Big Data pipeline.

  1. Get the correct data from the OpenWeatherMap API.

The data for the application will be ingested from OpenWeatherMap. Start by testing your requests to the OpenWeatherMap API in a friendly UI, e.g. SoapUI.
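To get a feel for the request, here is a minimal Python sketch (the API key is a placeholder; the endpoint and parameters follow OpenWeatherMap's public current-weather API):

```python
import requests

API_KEY = "YOUR_OPENWEATHERMAP_KEY"  # placeholder: use the key provided for the challenge
BASE_URL = "https://api.openweathermap.org/data/2.5/weather"

def fetch_weather(city: str) -> dict:
    """Fetch the current weather for one city from the OpenWeatherMap API."""
    response = requests.get(
        BASE_URL,
        params={"q": city, "appid": API_KEY, "units": "metric"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

print(fetch_weather("Edegem,BE"))
```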

  2. Create a consistent flow of data from the OpenWeatherMap API.

Write a script in Python or Java(Script) and deploy it on AWS Lambda. AWS Lambda will execute the script every 5 minutes: it gets the data from the OpenWeatherMap API and writes it to a Kinesis Firehose stream. The stream buffers your data and automatically inserts it into your S3 bucket. The stream and bucket are already configured. To upload your code to AWS Lambda, log in to the AWS Console with the credentials provided in the Dropbox folder. A sketch of such a function is shown below.
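A minimal Python sketch of the Lambda handler; the stream name, API key, and city list are placeholders, and it assumes records are written as comma-separated lines so the Hive step below can use a simple delimited format (note that the requests library must be bundled into the deployment package):

```python
import boto3
import requests

API_KEY = "YOUR_OPENWEATHERMAP_KEY"    # placeholder: key provided for the challenge
STREAM_NAME = "weather-stream"         # placeholder: the preconfigured Firehose stream
CITIES = ["Edegem,BE", "Antwerp,BE"]   # placeholder list of monitored locations

firehose = boto3.client("firehose")

def lambda_handler(event, context):
    """Runs every 5 minutes: fetch current weather and push it to Firehose."""
    for city in CITIES:
        resp = requests.get(
            "https://api.openweathermap.org/data/2.5/weather",
            params={"q": city, "appid": API_KEY, "units": "metric"},
            timeout=10,
        )
        resp.raise_for_status()
        w = resp.json()
        # One comma-separated line per reading, so the Hive table in the
        # next step can use a simple DELIMITED FIELDS definition.
        line = "{},{},{},{},{}\n".format(
            w["name"], w["main"]["temp"], w["main"]["pressure"],
            w["main"]["humidity"], w["dt"],
        )
        # Firehose buffers records and flushes them to the S3 bucket automatically.
        firehose.put_record(
            DeliveryStreamName=STREAM_NAME,
            Record={"Data": line.encode("utf-8")},
        )
    return {"records_sent": len(CITIES)}
```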

  3. Create a Hive table pointing to the data in the S3 bucket.

You need to be able to query the data you have ingested from the OpenWeatherMap API into the S3 bucket. For this part, connect to your EMR cluster, open the Hive shell, and create a table. HiveQL is very similar to other SQL dialects. In your CREATE statement you must define the LOCATION of the S3 bucket and the DELIMITED FIELDS clause. The content of the S3 bucket can also be reviewed in the AWS Console.
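As a sketch, a CREATE statement matching the comma-separated records from the Lambda sketch above could look like this (the bucket path and column list are placeholders):

```sql
CREATE EXTERNAL TABLE weather (
  city     STRING,
  temp     DOUBLE,   -- degrees Celsius
  pressure DOUBLE,   -- hPa
  humidity DOUBLE,   -- percent
  ts       BIGINT    -- Unix timestamp
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://your-team-bucket/';
```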

  4. Develop a monitoring application on top of the ingested data.

For this part you are free to use whatever front-end framework you feel comfortable with. You need to visualize the data in your Hive table, set up an alerting system, and pinpoint the locations on a map. For the map we have provided a GeoServer.
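As an illustration of the alerting part, a minimal Python sketch that flags critical readings; the thresholds are invented, and rows stands for the result of a Hive query (SELECT city, temp, pressure, humidity, ts FROM weather) fetched with whatever client you choose:

```python
PRESSURE_ALERT = 950.0  # hPa, invented threshold for "pressure tanks about to implode"
TEMP_ALERT = -60.0      # degrees Celsius, invented threshold

def check_alerts(rows):
    """Return (city, message) alerts for readings that cross a threshold."""
    alerts = []
    for city, temp, pressure, humidity, ts in rows:
        if pressure < PRESSURE_ALERT:
            alerts.append((city, f"pressure critically low: {pressure} hPa"))
        if temp < TEMP_ALERT:
            alerts.append((city, f"temperature critically low: {temp} °C"))
    return alerts

# Example with one hypothetical reading:
print(check_alerts([("Edegem", -72.5, 940.2, 12.0, 1481205600)]))
```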

 

And the winners are...

The Lions from Odisee Brussel!!!


Congratulations to Faissal Rasuly and Maciej Szabat for winning the Big Data challenge. They each went home with a nice iPad in their backpack.

 

Interested in an internship at Big Industries?

Ready to set off on a BIG journey?

The top-notch technologies we use set us apart from other consultancies.