|
|
## About
|
|
|
Step by step, this page explains how to:
|
|
|
* set up your working environment

* write a new fetcher for the **DBnomics platform**, or contribute to an existing one

* test this fetcher against the DBnomics validation script
|
|
|
|
|
|
## Changelog
|
|
|
|Date |Section |Comments|
|
|
|
|-----|--------|--------|
|
|
|
| 2019/05/22 | ALL | Huge rewrite / update |
|
|
|
| 2018/02/22 | [dbnomics-data-model](#data-model) | Updated with JSON Schema information on the [`sample-json-data-tree` directory in the `dbnomics-data-model` repo](https://git.nomics.world/dbnomics/dbnomics-data-model) |
|
|
|
|
|
|
|
|
|
## What is a fetcher?

A fetcher is a set of 2 scripts, written in any language (Python is used in this doc, and for [all fetchers developed by the DBnomics team](https://git.nomics.world/dbnomics-fetchers)).

The two scripts are:

- the downloader: `download.py` is responsible for downloading data from the provider (the place where the data is available)
  - in the context of the DBnomics GitLab CI, this data is committed to the corresponding `source-data` repository
- the converter: `convert.py` is responsible for converting that data from the provider's format to the DBnomics format
  - in the context of the DBnomics GitLab CI, this data is committed to the corresponding `json-data` repository

### Download process

- `download.py` downloads the data from the provider and puts this data **without changing the format** in a directory given in the script's arguments

- if the source data is a zip file, the downloader unzips the files but keeps the original file format

- when the script is executed in the context of a [GitLab CI job](https://docs.gitlab.com/ee/ci/introduction/), i.e. `download.py` is executed by the bash script in the [`.gitlab-ci.yml` file of the fetcher](https://git.nomics.world/dbnomics-fetchers/wb-fetcher/blob/master/.gitlab-ci.yml), the downloaded data is **committed** to the corresponding *source-data* git repository. Example: the [Worldbank fetcher](https://git.nomics.world/dbnomics-fetchers/wb-fetcher) puts **Worldbank source data** in the [wb-source-data](https://git.nomics.world/dbnomics-source-data/wb-source-data) repository.

### Conversion process

- `convert.py` converts the data downloaded by `download.py` into the DBnomics format and puts the resulting data in a directory given in the script's arguments

- the format of this data is described in [dbnomics-data-model](https://git.nomics.world/dbnomics/dbnomics-data-model) (more on this later)

- when the script is executed in the context of a [GitLab CI job](https://docs.gitlab.com/ee/ci/introduction/), i.e. `convert.py` is executed by the bash script in the [`.gitlab-ci.yml` file of the fetcher](https://git.nomics.world/dbnomics-fetchers/wb-fetcher/blob/master/.gitlab-ci.yml), the converted data is **committed** to the corresponding *json-data* git repository. Example: the [Worldbank fetcher](https://git.nomics.world/dbnomics-fetchers/wb-fetcher) puts **Worldbank json data** in the [wb-json-data](https://git.nomics.world/dbnomics-json-data/wb-json-data) repository.

### Requirements

* Git

* Python3

* virtualenv

### Create your environment

Inside your working directory:

* Create a `virtualenv` for DBnomics with python3:

```bash
me@mylaptop:~$ virtualenv --python=python3 nomics_env
```

* Activate the virtualenv:

```bash
me@mylaptop:~$ source nomics_env/bin/activate
(nomics_env) me@mylaptop:~$
```
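The download step described above can be sketched in Python as follows. This is a minimal illustration, not an actual DBnomics downloader: `SOURCE_URL` and the file name are placeholders, and real fetchers add things like retries and pagination.

```python
# Minimal sketch of a download.py: fetch one file from the provider and
# store it unchanged in the target directory given as a CLI argument.
# SOURCE_URL is a placeholder, not a real provider endpoint.
import argparse
import shutil
import urllib.request
from pathlib import Path

SOURCE_URL = "https://example.org/data.csv"

def download(source_url: str, target_dir: Path) -> Path:
    """Save the provider file as-is (no format change) into target_dir."""
    target_dir.mkdir(parents=True, exist_ok=True)
    target_file = target_dir / Path(source_url).name
    with urllib.request.urlopen(source_url) as response, target_file.open("wb") as f:
        shutil.copyfileobj(response, f)
    return target_file

def main(argv=None):
    parser = argparse.ArgumentParser(description="Download provider source data")
    parser.add_argument("target_dir", type=Path, help="e.g. wb-source-data")
    args = parser.parse_args(argv)
    download(SOURCE_URL, args.target_dir)
```

In the GitLab CI context, the job would run something like `python download.py wb-source-data` and then commit the result to the source-data repository.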
|
|
|
## Steps to write/contribute to a fetcher

- Clone the fetcher; it contains the download and the convert scripts

- or:

  - Write or clone the download script

  - Write or clone the convert script (which uses the data produced by the download script)

- Make some changes

- Validate the converted data (using [a script](https://git.nomics.world/dbnomics/dbnomics-data-model/blob/master/dbnomics_data_model/validate_storage.py) available in the dbnomics-data-model project)

All those steps will be described below.

### Architecture of a DBnomics fetcher

A fetcher is composed of modules or bricks:

* Data model: dbnomics-data-model

* Source data: dbnomics-source-data

* JSON data: dbnomics-json-data

* Fetcher: dbnomics-fetcher
|
|
|
|
|
|
### Prepare your environment

For pedagogical purposes we will create the tree for the dbnomics project by creating:

- dbnomics-source-data

- dbnomics-json-data

- dbnomics-fetchers

and by **cloning** from source:

- dbnomics-data-model

```bash
(nomics_env) me@mylaptop:~$ mkdir dbnomics-source-data
(nomics_env) me@mylaptop:~$ mkdir dbnomics-json-data
(nomics_env) me@mylaptop:~$ mkdir dbnomics-fetchers
```

At the end of the procedure, a DBnomics fetcher should be organized on your computer like this:

```bash
(nomics_env) me@mylaptop:~$ tree . -L 2
.
├── dbnomics-data-model
│   ├── dbnomics_data_model
│   ├── setup.cfg
│   └── setup.py
├── dbnomics-fetchers
│   └── <provider_slug>-fetcher
├── dbnomics-json-data
│   └── <provider_slug>-json-data
└── dbnomics-source-data
    └── <provider_slug>-source-data
```

All folders inside source-data and json-data **MUST** follow the naming conventions:

- `<provider_slug>-source-data`

- `<provider_slug>-json-data`

- `<provider_slug>-fetcher`
|
|
|
|
|
|
#### data-model

Data-model defines the JSON data model of DBnomics. Each new json-data produced by a fetcher must be compliant with this data model.

The expected format of the data produced by your fetcher is represented in the `tree-sample-json-data` directory, with a set of requirements and constraints that you have to validate using the data-model validation script.
|
|
|
|
|
|
#### Prepare destination folders

The download script needs an existing folder to put the *source data* in, and the converter script needs an existing folder to put the *json data* in. We could name these two directories freely, but later we will use the *validation script* to test whether our json-data is correct with regard to the DBnomics data model, and this script expects the json-data folder to be named `[provider_slug]-json-data`.

So we usually name those folders `[provider_slug]-source-data` and `[provider_slug]-json-data`. In our example, the slug used for Worldbank is `wb`, so:

```bash
mkdir wb-source-data
mkdir wb-json-data
```

(If you're creating a fetcher from scratch, you can choose the provider slug, but have a look at the [official fetchers list](https://git.nomics.world/dbnomics-fetchers) first to check that this fetcher slug is available.)
|
|
|
#### Clone or create a fetcher

##### Clone an existing fetcher

In this example we'll clone the [Worldbank fetcher](https://git.nomics.world/dbnomics-fetchers/wb-fetcher).

You can clone using https, or in ssh mode (in that case you must first add your SSH key to your profile on git.nomics.world).

Inside your working directory:

```bash
git clone https://git.nomics.world/dbnomics-fetchers/wb-fetcher.git
```
|
|
|
|
|
|
##### Install fetcher dependencies

Fetchers often depend on some third-party libraries. Install them with:

```bash
pip install -r requirements.txt
```

Add the data model package to the fetcher's requirements, pinned to the current version.

> When pulling dbnomics-converters, think about reinstalling the current version with `pip install -e dbnomics-data-model/`

##### Create a new fetcher

A small part of the fetcher code is common to every fetcher, so to avoid starting from scratch we created a [cookiecutter](https://git.nomics.world/dbnomics/dbnomics-fetcher-cookiecutter) (i.e. a template).

Follow the `README.md` of the cookiecutter repo to get started.
|
|
|
|
|
|
### Now you should get ready

Your working directory may look like:

```
.
├── wb-fetcher
├── wb-source-data
└── wb-json-data
```

* First, check the information about the data-model package:

```sh
(nomics_env) me@mylaptop:~$ pip show dbnomics-data-model
Name: dbnomics-data-model
Version: 0.7.1
Summary: Define and validate DB.nomics data model.
Home-page: https://git.nomics.world/dbnomics/dbnomics-data-model
Author: Christophe Benz
Author-email: christophe.benz@cepremap.org
License: https://www.gnu.org/licenses/agpl-3.0.en.html
Location: /home/me/dbnomics/dbnomics-data-model
Requires: dulwich, jsonschema, ujson
```

* Inside your fetcher you *MUST* declare the version of data-model you are using. Example:

```python
DATA_MODEL_VERSION = "0.7.1"
```

You're ready to start modifying the cloned fetcher, or editing the cookiecutter output to start a new fetcher.
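The declared version can also be checked at run time. Here is a sketch, where `check_data_model_version` is a hypothetical helper, not part of the data model package:

```python
# Illustrative guard: fail fast when the installed dbnomics-data-model
# differs from the version this fetcher declares.
from importlib.metadata import version

DATA_MODEL_VERSION = "0.7.1"  # the version this fetcher was written against

def check_data_model_version(installed: str, expected: str = DATA_MODEL_VERSION) -> None:
    """Raise if the installed data model differs from the declared one."""
    if installed != expected:
        raise RuntimeError(
            f"fetcher expects dbnomics-data-model {expected}, found {installed}"
        )

# At runtime you would call:
# check_data_model_version(version("dbnomics-data-model"))
```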
|
|
|
|
|
|
### Generate json-data

The json-data produced by your fetcher must be similar to the `tree-sample-json-data` stored in the dbnomics-data-model folder.

Here is the general look of the files and directories that constitute a json-data directory (what will be created by the convert script):

```
[my_provider]-json-data
|- datapackage.json
|- category_tree.json  <-- [not required] metadata about datasets categorization (in a tree)
|- provider.json       <-- metadata about this provider
|- dataset1            <-- a dataset folder
|  |- dataset.json     <-- the file containing this dataset metadata
|  |- A1.B1.C1.tsv     <-- a dataset's series
|  |- A1.B1.C2.tsv
|  |- A1.B2.C1.tsv
|  |- A1.B2.C2.tsv
|  |- etc.
```
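A minimal convert step producing this kind of layout might look like the sketch below. The dataset code, the metadata fields, and the `raw.csv` input are illustrative only; check the data model for the exact fields required in `provider.json` and `dataset.json`.

```python
# Sketch of a convert.py step: read a provider CSV from source_dir and emit
# a DBnomics-style tree (provider.json, a dataset folder, one TSV per series).
# Metadata below is minimal and illustrative; the data model requires more.
import csv
import json
from pathlib import Path

def convert(source_dir: Path, target_dir: Path) -> None:
    target_dir.mkdir(parents=True, exist_ok=True)
    (target_dir / "provider.json").write_text(
        json.dumps({"code": "my_provider", "name": "My Provider"}, indent=2, sort_keys=True)
    )
    dataset_dir = target_dir / "dataset1"
    dataset_dir.mkdir(exist_ok=True)
    (dataset_dir / "dataset.json").write_text(
        json.dumps({"code": "dataset1", "name": "Example dataset"}, indent=2, sort_keys=True)
    )
    # One TSV file per series, one observation per line.
    with (source_dir / "raw.csv").open() as f:
        rows = list(csv.DictReader(f))
    lines = ["PERIOD\tVALUE"] + [f"{row['period']}\t{row['value']}" for row in rows]
    (dataset_dir / "A1.B1.C1.tsv").write_text("\n".join(lines) + "\n")
```

Writing files deterministically (sorted keys, stable row ordering) helps satisfy the stability constraint of the data model.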
|
|
|
|
|
|
The json-data schema expresses a set of constraints:

- The repository directory name MUST be equal to the provider code + "-json-data".

- Each dataset directory name MUST be equal to the dataset code.

- Conversions MUST be stable: 2 executions of the conversion script MUST be equivalent to one.

You will have to validate them with the validation script.
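The stability constraint can be smoke-tested on your side by running your converter twice into two directories and comparing the resulting trees. A sketch:

```python
# Quick check for "conversions MUST be stable": convert twice, then compare
# the two output trees recursively (directory names and file contents).
import filecmp
from pathlib import Path

def trees_equal(a: Path, b: Path) -> bool:
    """Recursively compare two directory trees."""
    cmp = filecmp.dircmp(a, b)
    if cmp.left_only or cmp.right_only or cmp.diff_files or cmp.funny_files:
        return False
    return all(trees_equal(a / d, b / d) for d in cmp.common_dirs)
```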
|
|
|
|
|
|
##### Validate your data

* Validate the hierarchy of the data produced, using the script provided in the data-model folder:

```sh
./scripts/test_tree_sample.sh
```

* Validate a JSON data Git repository, using the script provided in data-model:

```sh
./scripts/validate_json_data_git_repository.py <git_repo_dir>

# for example:
./scripts/validate_json_data_git_repository.py wto-json-data
```

:bulb: If some of your data doesn't fit the model and the model needs some additional constraint, you can add it to the tree example and make a PR on the data-model repo, upgrading the version.
|
|
|
#### Source Data

Source data is a folder and a git repository where the raw datasets will be put, i.e. a raw deposit of the provider's source files.

* Create an empty repository for **source-data**:

Inside https://git.nomics.world/dbnomics-source-data click on `New Project`

Name your project following the naming convention:

`<provider_slug>-source-data`

Add a description for the project following this pattern:

`Series from <provider_name> (acronym explanation) macro economic database in source format`

> Leave the visibility public

* Clone it inside your dbnomics-source-data folder:

```bash
(nomics_env) me@mylaptop:~$ cd dbnomics-source-data/
(nomics_env) me@mylaptop:~/dbnomics-source-data/$ git clone git@git.nomics.world:dbnomics-source-data/<provider_slug>-source-data.git
(nomics_env) me@mylaptop:~/dbnomics-source-data/$ cd <provider_slug>-source-data
```

> Your fetcher's script `<provider_slug>_to_source_data.py` will populate this dedicated repository (`<provider_slug>-source-data`) with the targeted datasets.

Note:

- you can have a look at [existing json-data repos](https://git.nomics.world/dbnomics-json-data) for real-world examples

- you can also have a look at the [dbnomics-data-model fixtures folder](https://git.nomics.world/dbnomics/dbnomics-data-model/tree/master/tests/fixtures) for fake examples (used to test the data model)
|
|
|
|
|
|
#### JSON Data

JSON data is a git repository where the results of the conversion process from the datasets (source-data repository) to **DBnomics datasets** (json-data) are stored.

* Create an empty repository for **json-data**:

Inside https://git.nomics.world/dbnomics-json-data click on `New Project`

Name your project following the naming convention:

`<provider_slug>-json-data`

Add a description for the project following this pattern:

`Series from <provider_name> (acronym explanation) macro-economic data converted to DBnomics JSON format`

> Leave the visibility public

* Clone it inside the dbnomics-json-data folder:

```bash
(nomics_env) me@mylaptop:~$ cd dbnomics-json-data/
(nomics_env) me@mylaptop:~/dbnomics-json-data/$ git clone git@git.nomics.world:dbnomics-json-data/<provider_slug>-json-data.git
(nomics_env) me@mylaptop:~/dbnomics-json-data/$ cd <provider_slug>-json-data
```

#### Using jsonl files

When a dataset contains a huge number of time series (around 1000), the `dataset.json` file grows drastically. In this case, the use of `series.jsonl` files ([JSON Lines](http://jsonlines.org/) format) is recommended, because parsing a JSON Lines file line by line consumes less memory than opening a whole JSON file.

#### Going further into details

For complete documentation about the structure of those files, please refer to the [Storing time series](https://git.nomics.world/dbnomics/dbnomics-data-model/blob/master/README.md#storing-time-series) section of the README of the data model project.

### Validate your json-data

Generating valid data is essential for those data to be understood by the [DBnomics API](https://git.nomics.world/dbnomics/dbnomics-api), and so displayed on the [DBnomics website](https://db.nomics.world/).

Some general rules (expressed in the [data model](https://git.nomics.world/dbnomics/dbnomics-data-model)) define a set of constraints:

- The repository directory name MUST be equal to the provider code + "-json-data"

- Each dataset directory name MUST be equal to the corresponding dataset code

- Conversions MUST be stable: 2 executions of the conversion script MUST be equivalent to one

- (and many others!)

Fortunately, a [validation script](https://git.nomics.world/dbnomics/dbnomics-data-model/blob/master/dbnomics_data_model/validate_storage.py) exists to help you validate all of those constraints.

In the next section, we'll explain how to install the DBnomics data model and use this script in detail.

### Install data-model and use validation script

Data-model defines the JSON data model of DBnomics. Each new json-data produced by a fetcher must be compliant with this data model.
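For the `series.jsonl` option mentioned in the "Using jsonl files" section, writing and reading line by line can be sketched as follows. The field names are illustrative; see the data model for the real schema.

```python
# Sketch: one JSON object per line, so a huge series file never needs to be
# loaded into memory at once. Field names here are illustrative only.
import json
from pathlib import Path

def write_series_jsonl(path: Path, series_list) -> None:
    """Write one series dict per line, with deterministic key order."""
    with path.open("w") as f:
        for series in series_list:
            f.write(json.dumps(series, sort_keys=True) + "\n")

def iter_series_jsonl(path: Path):
    """Yield one series dict at a time, parsing line by line."""
    with path.open() as f:
        for line in f:
            yield json.loads(line)
```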
|
|
|
|
|
|
First, let's install it inside the dbnomics virtual env.

* `clone` the data_model repo:

```bash
(dbnomics_env) git clone https://git.nomics.world/dbnomics/dbnomics-data-model.git
```

* Install the package:

```bash
(dbnomics_env) pip install -e dbnomics-data-model/
```

This will install the dbnomics-data-model lib, and especially the validation script, available in the virtual env as the `dbnomics-validate` command (this magic trick is done in the [`setup.py` file of the data-model package](https://git.nomics.world/dbnomics/dbnomics-data-model/blob/master/setup.py#L76)).

#### Fetcher

The Fetcher is a set of two scripts to acquire and transform raw data from the source into DBnomics datasets.

* Create an empty repository for your fetcher:

Inside https://git.nomics.world/dbnomics-fetchers click on `New Project`

> Name your project following the naming convention:

`<provider_slug>-fetcher`

> Add a description for the project following this pattern:

`DBnomics fetcher for series from <provider_name> (acronym explanation if needed) macro economic database`

> Leave the visibility public
|
|
|
* Clone it inside your dbnomics-fetchers folder:

```bash
(nomics_env) me@mylaptop:~$ cd dbnomics-fetchers/
(nomics_env) me@mylaptop:~/dbnomics-fetchers/$ git clone git@git.nomics.world:dbnomics-fetchers/<provider_slug>-fetcher.git
(nomics_env) me@mylaptop:~/dbnomics-fetchers/$ cd <provider_slug>-fetcher
```

#### Use the validation script to validate your data

So this is the big moment! You wrote a brand new fetcher, or you fixed a bug in one of the existing fetchers (what?! no way!), you ran the `convert.py` script on previously downloaded `source-data`, and you want to know whether the generated data is valid.

As we saw in the previous section, the script is available in the virtual env as a shell command: `dbnomics-validate`.

So, to run the validation script on your `json-data`:

```sh
dbnomics-validate [my_fetcher]-json-data
```
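If you prefer to drive the validator from Python (for instance in a test or CI step), a tiny wrapper can be sketched as follows. This is an illustration, not an official API; it only assumes that `dbnomics-validate` is on the PATH, as installed above.

```python
# Illustrative wrapper: run the dbnomics-validate command on a json-data
# directory and report its exit code (0 means the data is valid).
import subprocess

def validate(json_data_dir: str, command=None) -> int:
    """Run the validator on json_data_dir and return its exit code."""
    cmd = list(command or ["dbnomics-validate"]) + [json_data_dir]
    return subprocess.run(cmd).returncode
```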
|
|
|
|
|
|
The fetcher will be composed of three mandatory files:

* `<provider_slug>_to_source_data.py`

* `<provider_slug>_source_data_to_dbnomics.py`

* `requirements.txt`
|
|
|
|
|
|
### Source data

This corresponds to the `<provider_slug>_to_source_data.py` script in your fetcher, which populates the source data repository.

`<provider_slug>_to_source_data.py` is a script that:

* given a provider

* populates the **source-data** repository

* with the raw data of the provider (the specific datasets mentioned in the Analysis)

* by using the most appropriate method

The datasets that have to be stored are listed in the corresponding ***Analysis*** document, which you will find in the gitlab project dbnomics-fetcher/management along with the corresponding issue.

:bulb: Replace [my_provider] with the slug of the provider you're working on. See the previous "Now you should get ready" section for details.
|
|
|
|
|
|
* Create the file `<provider_slug>_to_source_data.py` inside dbnomics-fetchers/<provider_slug>-fetcher/

Often a bunch of errors shows up. Don't panic! The same little fix in the converter's code often fixes a bunch of errors.

Some useful tips:

* Read the analysis that specifies which datasets we want to store and how to access them

* Define the targeted datasets and add assertion checks to detect any change in the access to the datasets

* Specify the **source-data repository** for your provider in your `<provider_slug>_to_source_data.py`; this script will be executed from the CLI by GitLab CI, so it should take at least one argument: the destination for the datasets, i.e. the specific path of the source-data repository corresponding to your provider
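The assertion-check tip above can be sketched as a fail-fast guard in the download script. Everything here is hypothetical: `EXPECTED_DATASETS` would come from your Analysis document, and the index URL from the provider.

```python
# Illustrative guard: fail fast if the datasets listed in the Analysis are
# no longer exposed on the provider's index page. Names are hypothetical.
import urllib.request

EXPECTED_DATASETS = {"GDP", "CPI"}  # dataset codes taken from the Analysis

def check_datasets_listed(index_url: str) -> None:
    """Raise if any expected dataset code is missing from the index page."""
    with urllib.request.urlopen(index_url) as response:
        page = response.read().decode("utf-8")
    missing = {code for code in EXPECTED_DATASETS if code not in page}
    if missing:
        raise RuntimeError(f"datasets no longer listed by provider: {missing}")
```

Failing loudly like this is preferable to silently downloading a partial or restructured dataset.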
|
|
|
|
|
|
### JSON data

This corresponds to the `<provider_slug>_to_dbnomics.py` script in your fetcher, which populates the json-data repository by converting source data to the DBnomics format.

`<provider_slug>_to_dbnomics.py` is a script that:

* given a data source

* populates the **json-data** repository

* with the selected and converted data, as mentioned in the Analysis

* by using the most appropriate method and the dbnomics-converters built-in functions to help and validate

Useful tips:

* Open the corresponding Analysis, which defines the structure and the targeted time series you will need to extract from the raw data stored in source-data

* Don't forget to validate the data produced, using dbnomics-data-model
|
|
|
|
|
|
### Requirements

Put in `requirements.txt` the external packages needed to run the scripts.

`requirements.txt` example:

```
requests
xlrd
bs4
```

Here's the algorithm of a validation process:

```
while True:
    fix your code
    validation_ok = run the validator
    if validation_ok:
        # you're done!
        break
    else:
        don't panic
        take a breath
```
|
|
|
|
|
|
When the validation script passes on your json-data, you're good to go for a pull request with the DBnomics team :)