How to prune our AWS Neptune graph database in order to lower costs and improve response times?


(Kristaps Horns) #1

Name of company/organisation:
Trapps Wise BV

Link to website:
trappswise.nl

Short description of your company/organisation:
We are the “Marine Traffic” of the inland waterways. We provide APIs, and products built with these APIs, that allow other stakeholders to monitor ship traffic in France, Germany, the Netherlands and Belgium, with the aim of lowering CO2 emissions and fostering multilateral trade.

Describe the challenge you are facing:
We use a graph database to identify geographic points of interest along waterways and to monitor their relationships efficiently. Using the graph database we can calculate the shortest path between two ports and identify all points of interest along the way, such as locks, anchorages and bridges. With this information we can provide individual ships with their “time to arrival” and other information relevant to the marine and logistics industries. We acquire this data somewhat randomly, so the data points written to the database must be pruned from time to time to reduce the number of nodes and edges while preserving the desired accuracy.
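One common pruning approach that preserves shortest-path distances is to contract nodes of degree 2 that are not themselves points of interest, replacing each such node's two edges with a single edge whose weight is their sum. The sketch below is a minimal, hypothetical illustration on a plain Python adjacency dict; the node names, weights and data representation are assumptions for the example, not Trapps Wise's actual Neptune schema or query layer.

```python
def prune_degree_two(adj, keep):
    """Contract non-POI nodes of degree 2 in an undirected weighted graph.

    adj:  {node: {neighbor: weight}} adjacency mapping (symmetric).
    keep: set of point-of-interest nodes that must never be removed.

    Shortest-path distances between the remaining nodes are preserved,
    because each removed node's two edges are merged into one edge whose
    weight is their sum.
    """
    adj = {n: dict(nbrs) for n, nbrs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if node not in adj or node in keep:
                continue
            nbrs = adj[node]
            if len(nbrs) != 2:
                continue
            (a, wa), (b, wb) = nbrs.items()
            if a == b:  # skip parallel edges to the same neighbor
                continue
            # Replace a-node-b with a direct a-b edge; if a parallel
            # edge already exists, keep the shorter of the two.
            w = wa + wb
            existing = adj[a].get(b)
            if existing is None or w < existing:
                adj[a][b] = w
                adj[b][a] = w
            del adj[a][node]
            del adj[b][node]
            del adj[node]
            changed = True
    return adj


# Example: a chain lock - x - y - port where x and y are ordinary
# waypoints, collapsed into a single lock-port edge of weight 2 + 3 + 5.
waterway = {
    "lock": {"x": 2},
    "x": {"lock": 2, "y": 3},
    "y": {"x": 3, "port": 5},
    "port": {"y": 5},
}
pruned = prune_degree_two(waterway, keep={"lock", "port"})
```

In Neptune itself, the same idea would be expressed as Gremlin (or openCypher) traversals that find degree-2 vertices lacking a point-of-interest label and rewrite their edges, but the selection logic is the same as in this sketch.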

Why is the challenge important to your company/organisation?
A lower number of nodes and edges reduces response times and helps lower costs over time.

Are you looking for one student or a team of students?
Either is fine; we have no preference.

What are the language requirements to work on this challenge?
English is our primary working language.

How many hours do you estimate this challenge might take?
80


(MM) #2

Which specific skills are required to have a reasonable expectation of solving the challenge?


(system) closed #3

This topic was automatically closed 5 hours after the last reply. New replies are no longer allowed.