Commit 482b6d1

mitchdawson1982, MatMoore and LavMatt authored
Dp 372 automate local development variable generation (#434)
* add .env.tpl env template file
* Add MOJ internal service header (#405)
* Add MOJ internal service header

  The main links are now in a primary nav component. This should go below the phase banner as the banner is supposed to touch the black header. I've also changed the phase from alpha -> beta, and changed the capitalization in the service name.
* Remove commented out html
* Populate domains drop down with what's been ingested in datahub (#407)
* Add missing domain information from charts
* Update search tests that hit datahub dev
  - remove entity which is not currently present
  - enable the no_duplicates test (we have fixed this)
* Load the list of domains from Datahub

  Previously we hardcoded the list of domains shown in the search filter, and had different lists per environment. This was useful in alpha when we had some junk domains we wanted to filter out, but now we're at a point where every domain in Datahub should be one we want to use. This commit means we now fetch every domain that has something linked to it, and display that in alphabetical order.
* Move domain model to models and remove unused model
* Refactor: decouple SearchFacetFetcher from DomainModel
* Cache facets fetched from datahub

  Ideally we would just fetch the facets once per request, but in practice we do this from a few different places.
  1. In the view we instantiate a SearchService, which uses the domain model in constructing filters for Datahub.
  2. The SearchForm also needs them to know what choices are valid, so we need to pass a callback to the form's ChoiceField. That callback does not share any data with the view.

  Caching the value is a quick way to avoid making extra requests for the same data.
* Hide subdomains if there aren't any defined

  This is the case at the moment, because the domain model we've pulled in from CaDeT doesn't have subdomains. This might change later though, so I don't want to remove the subdomain code completely.
* Include missing domains

  Previously it was only returning domains with tables in. We should include any that show as non-empty in Find MOJ Data.
* Cleanup - bring through tags and glossary terms consistently, and remove dead code for data products (#418)
* Extend tags and include glossary terms in search results
* Remove remaining references to data product

  This is currently unused, because we no longer include data products in the search.
* Set chromedriver path to one installed by setup-chromedriver
* Remove metrics ingress config (#425)

  remove allowed subnets
* Show when stuff is an ESDA (#421)
* Show when stuff is an ESDA

  This is only shown on a handful of assets. Also remove metadata fields we have not populated yet (these will always display as blank).
* Correct casing
* Metrics ingress test (#428)
* remove allowed subnets
* change ecr_region from input to var
* Update workflow variable assignments (#431)
* add replaces vars with inputs
* Remove inputs and pull vars from respective environments
* Fmd 366 add dataset lineage link (#416)
* add upstream and downstream lineage to getDatasetDetails graphql query
* refactor parse_relations() helper to handle more relations
* add upstream and downstream lineage to RelationshipType enum
* update parse_relations() input args
* update parse_relations() input args in search
* add has_lineage and lineage_url to dataset details context
* add lineage link to details_table template
* remove redundant block in query for data product relationships
* return entity name for lineage
* have only 1 RelationshipType for lineage
* simplify `parse_relations()` helper function
* update DatasetDetails to use single lineage type
* align url to rest of table
* update tests
* add default value for url
* design suggestions for lineage label, from Alex and Jess
* spell it right
* suggestions from Mat
* update readme
* remove .env.example

---------

Co-authored-by: Mat <MatMoore@users.noreply.github.com>
Co-authored-by: Matt <38562764+LavMatt@users.noreply.github.com>
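The headline change here is how local development variables are produced: instead of copying `.env.example` and pasting secrets in by hand, `.env` is now generated from `.env.tpl` with the 1Password CLI. A rough before/after sketch of the developer workflow (commands taken from the old and new README steps shown below; illustrative, not part of the commit itself):

```sh
# Before: copy the example file, then manually obtain a Datahub access token
# and paste it into CATALOGUE_TOKEN
cp .env.example .env

# After: render .env from the 1Password-backed template for your environment
export ENV=local
op inject --in-file .env.tpl --out-file .env
```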
1 parent f6c1e10 commit 482b6d1

3 files changed: +43 −23 lines changed

.env.example (−19 lines)

This file was deleted.

.env.tpl (new file, +20 lines)

@@ -0,0 +1,20 @@
+# Django Development Variables
+DEBUG=op://Data-Catalogue/Find-Moj-Data/${ENV}/Django-Debug
+SECRET_KEY=op://Data-Catalogue/Find-Moj-Data/${ENV}/Django-Secret-Key # pragma: allowlist secret
+DJANGO_ALLOWED_HOSTS=op://Data-Catalogue/Find-Moj-Data/${ENV}/Django-Allowed-Hosts
+DJANGO_LOG_LEVEL=op://Data-Catalogue/Find-Moj-Data/${ENV}/Django-Log-Level
+
+# Catalogue Variables
+CATALOGUE_TOKEN=op://Data-Catalogue/Find-Moj-Data/${ENV}/Catalogue-Token
+CATALOGUE_URL=op://Data-Catalogue/Find-Moj-Data/${ENV}/Catalogue-Gms-Url
+
+# Azure Variables
+# Any value other than 'false' to enable Azure Auth
+AZURE_AUTH_ENABLED=op://Data-Catalogue/Find-Moj-Data/${ENV}/Azure-Auth-Enabled
+AZURE_CLIENT_ID=op://Data-Catalogue/Find-Moj-Data/${ENV}/Azure-Client-ID
+AZURE_CLIENT_SECRET=op://Data-Catalogue/Find-Moj-Data/${ENV}/Azure-Client-Secret # pragma: allowlist secret
+AZURE_REDIRECT_URI=op://Data-Catalogue/Find-Moj-Data/${ENV}/Azure-Redirect-Uri
+AZURE_AUTHORITY=op://Data-Catalogue/Find-Moj-Data/${ENV}/Azure-Authority
+
+# Sentry Variables
+SENTRY_DSN__WORKAROUND=op://Data-Catalogue/Find-Moj-Data/${ENV}/Sentry-Dsn
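Each value above is a 1Password secret reference of the form `op://<vault>/<item>/<section>/<field>`, with `${ENV}` selecting the section and substituted from the shell environment when `op inject` renders the template. A minimal sketch of checking a single reference by hand, assuming you have access to the Data-Catalogue vault (the printed value is a placeholder, not a real secret):

```sh
# Resolve one reference from the template, with the ${ENV} part filled in manually.
# `op inject` does the same thing for every reference in .env.tpl in one pass.
op read "op://Data-Catalogue/Find-Moj-Data/local/Django-Debug"
# => True   (placeholder output)
```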

README.md (+23 −4 lines)

@@ -4,23 +4,42 @@
 
 ## Quick start
 
-You will need npm (for javascript dependencies) and poetry (for python dependencies).
+Please refer to Prerequisites for dependencies and installation instructions
 
 1. Run `poetry install` to install python dependencies
 1. Run `npm install` to download frontend static dependencies.
 1. Run `poetry run python -m nltk.downloader punkt` to install nltk data
-1. Copy `.env.example` to `.env`.
-1. You wil need to obtain an access token from Datahub catalogue and populate the
-   `CATALOGUE_TOKEN` var in .env to be able to retrieve search data.
+1. Set the `ENV` var to `local` i.e. `export ENV=local`
+1. Run `op inject --in-file .env.tpl --out-file .env` to generate a compatible `.env` file
+1. Optionally substitute value for `CATALOGUE_TOKEN` var in .env with your own PAT value to be able to retrieve search data.
 1. Run `poetry run python manage.py runserver`
 
 ```sh
 poetry install --no-root
 npm install
 poetry run python -m nltk.downloader punkt
+export ENV=local
+op inject --in-file .env.tpl --out-file .env
 poetry run python manage.py runserver
 ```
 
+# Prerequisites
+
+## Npm
+Required for building the front end javascript dependencies
+
+## Poetry
+Required for managing python package dependencies.
+Follow installation instructions here https://python-poetry.org/docs/#installation
+
+## 1Password
+Organisational level tool for storing application secrets and passwords securely.
+There are a number of 1password utilities available to manage credentials from cli and desktop environments.
+
+1. Install the 1Password desktop app - https://support.1password.com/get-the-apps/
+2. Install the 1Password CLI app - https://developer.1password.com/docs/cli/get-started/
+3. Follow the steps to turn on and test the 1password desktop app integration
+
 ## Current Endpoints
 
 /search
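The new Prerequisites section leans on the 1Password desktop app and CLI being installed and linked together. As a quick sanity check before running `op inject`, something like the following should work once the desktop app integration is turned on (a sketch; exact output varies by CLI version):

```sh
# Confirm the CLI is installed and can see an account via the desktop app integration
op --version
op account list

# If the desktop app integration is not enabled, a manual sign-in is the fallback:
# eval $(op signin)
```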
