Generally, if you want to emulate the production environment, use ``docker-compose.production.yml`` instead. The same is true for any other action you might need to perform: whenever a different stack is required, just switch the compose file.
After we have created our initial image we need to generate a lockfile for our dependencies.
Docker cannot write to the host system during builds, so we have to run the command that generates the lockfile inside the container.
This is important for reproducible builds and ensures that the dependencies are installed correctly in the container.
Updating the lockfile manually is normally not necessary when you add packages through ``uv add <package_name>``.

This is done by running the following command: ::

    docker compose -f docker-compose.local.yml run --rm django uv lock

To be sure we are on the right track we need to build our image again: ::

    docker compose -f docker-compose.local.yml build
Before doing any git commit, `pre-commit`_ should be installed globally on your local machine, and then::

    git init
    pre-commit install

Failing to do so will result in a bunch of CI and linter errors that can be avoided with pre-commit.
Open a terminal at the project root and run the following for local development::

    docker compose -f docker-compose.local.yml up

You can also set the environment variable ``COMPOSE_FILE`` pointing to ``docker-compose.local.yml`` like this::

    export COMPOSE_FILE=docker-compose.local.yml

And then run::

    docker compose up
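If you routinely combine several compose files, ``COMPOSE_FILE`` can also list multiple files separated by a colon on Linux/macOS (the separator is configurable via ``COMPOSE_PATH_SEPARATOR``). A small sketch, using this project's compose files as an example:

```shell
# COMPOSE_FILE accepts several files separated by ':' on Linux/macOS;
# later files extend and override earlier ones, so this layers the docs
# stack on top of the local one:
export COMPOSE_FILE=docker-compose.local.yml:docker-compose.docs.yml
echo "$COMPOSE_FILE"
# After this, a plain `docker compose up` brings up both stacks.
```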
To run in a detached (background) mode, just::

    docker compose up -d

These commands don't run the docs service. In order to run the docs service you can run::

    docker compose -f docker-compose.docs.yml up

To run the docs with local services just use::

    docker compose -f docker-compose.local.yml -f docker-compose.docs.yml up

The site should start and be accessible at http://localhost:3000 if you selected Webpack or Gulp as the frontend pipeline, and at http://localhost:8000 otherwise.
Execute Management Commands
---------------------------

As with any shell command that we wish to run in our container, this is done using the ``docker compose -f docker-compose.local.yml run --rm`` command: ::

    docker compose -f docker-compose.local.yml run --rm django python manage.py migrate
    docker compose -f docker-compose.local.yml run --rm django python manage.py createsuperuser

Here, ``django`` is the target service we are executing the commands against.
Also, please note that ``docker exec`` does not work for running management commands.
One final touch: should you ever need to merge ``.envs/.production/*`` into a single ``.env``, run ``merge_production_dotenvs_in_dotenv.py``: ::

    python merge_production_dotenvs_in_dotenv.py

The ``.env`` file will then be created, with all your production envs residing beside each other.
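The actual script ships with the generated project; conceptually, it does something along these lines. This is a simplified sketch, not the real implementation, and the helper name ``merge_dotenvs`` is made up for illustration:

```python
from pathlib import Path


def merge_dotenvs(env_files, output):
    """Concatenate several dotenv files into a single .env file.

    Illustrative sketch only; the real merge_production_dotenvs_in_dotenv.py
    in the generated project is the authoritative version.
    """
    chunks = [Path(f).read_text().strip() for f in env_files]
    Path(output).write_text("\n".join(chunks) + "\n")
```

Each source file's lines end up side by side in the output, which is why the production envs "reside beside each other" afterwards.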
Activate a Docker Machine
~~~~~~~~~~~~~~~~~~~~~~~~~

This tells our computer that all future commands are specifically for the dev1 machine. Using the ``eval`` command we can switch machines as needed::

    eval "$(docker-machine env dev1)"

Add 3rd party python packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To install a new 3rd party python package, you cannot simply run ``uv add <package_name>`` inside the container, as that would only add the package to the container. The container is ephemeral, so the new library won't be persisted when you run another container. Instead, you should modify the Docker image:

You have to modify ``pyproject.toml`` and add the package either to ``project.dependencies`` or to ``tool.uv.dev-dependencies``: ::

    "<package_name>==<package_version>"
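For illustration, such entries might look like this in ``pyproject.toml``. The package names and versions below are placeholders, not packages the template actually depends on:

```toml
[project]
dependencies = [
    "django-extensions==3.2.3",  # example: a pinned runtime dependency
]

[tool.uv]
dev-dependencies = [
    "ipdb==0.13.13",  # example: a development-only dependency
]
```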
To get this change picked up, you'll need to rebuild the image(s) and restart the running container: ::
Then you may need to run the following for it to work as desired: ::

    docker compose -f docker-compose.local.yml run --rm --service-ports django
django-debug-toolbar
~~~~~~~~~~~~~~~~~~~~

docker
~~~~~~

The ``container_name`` from the yml file can be used to check on containers with docker commands, for example: ::

    docker logs <project_slug>_local_celeryworker
    docker top <project_slug>_local_celeryworker

Notice that the ``container_name`` is generated dynamically using your project slug as a prefix.
Rebuild your ``docker`` application: ::

    docker compose -f docker-compose.local.yml up -d --build

Go to your browser and type in your URL bar ``https://my-dev-env.local``.
Alternatively, to serve the app over ASGI with autoreload, run: ::

    uv run uvicorn config.asgi:application --host 0.0.0.0 --reload --reload-include '*.html'

If you've opted for Webpack or Gulp as frontend pipeline, please see the :ref:`dedicated section <bare-metal-webpack-gulp>` below.
Following this structured approach, here's how to add a new app:

#. **Create the app** using Django's ``startapp`` command, replacing ``<name-of-the-app>`` with your desired app name: ::

       uv run python manage.py startapp <name-of-the-app>

#. **Move the app** to the Django Project Root, maintaining the project's two-tier structure: ::

       mv <name-of-the-app> <django_project_root>/

#. **Edit the app's apps.py**, changing ``name = '<name-of-the-app>'`` to ``name = '<django_project_root>.<name-of-the-app>'``.
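After the move, the app's ``apps.py`` ends up looking roughly like this. This is a sketch with placeholder names (``my_awesome_project`` for the project root, ``blog`` for the app), not generated code:

```python
# <django_project_root>/<name-of-the-app>/apps.py
from django.apps import AppConfig


class BlogConfig(AppConfig):  # "blog" is a placeholder app name
    # Use the dotted path, since the app now lives inside the project package:
    name = "my_awesome_project.blog"
```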
#. Make it executable: ::

       chmod +x mailpit

#. Spin up another terminal window and start it there: ::
Next, make sure `redis-server` is installed (per the `Getting started with Redis`_ guide) and run the server in one terminal::

    redis-server

Start the Celery worker by running the following command in another terminal::

    uv run celery -A config.celery_app worker --loglevel=info

That Celery worker should be running whenever your app is running, typically as a background process,
so that it can pick up any tasks that get queued. Learn more from the `Celery Workers Guide`_.

The project comes with a simple task for manual testing purposes, inside `<project_slug>/users/tasks.py`. To queue that task locally, start the Django shell, import the task, and call `delay()` on it::

    uv run python manage.py shell
    >> from <project_slug>.users.tasks import get_users_count
    >> get_users_count.delay()
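For reference, a task like ``get_users_count`` is typically just a thin ``shared_task`` wrapper around an ORM query. The following is a sketch of what such a task might look like; consult the generated ``<project_slug>/users/tasks.py`` for the actual code:

```python
# <project_slug>/users/tasks.py (sketch; requires a configured Django project)
from celery import shared_task
from django.contrib.auth import get_user_model


@shared_task()
def get_users_count():
    """A simple task that counts users, handy for manually checking the worker."""
    return get_user_model().objects.count()
```

Calling ``get_users_count.delay()`` queues the task on the broker (Redis here); the running worker picks it up and executes it asynchronously.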
#. Make sure that `Node.js`_ v18 is installed on your machine.
#. In the project root, install the JS dependencies with::

       npm install

#. Now - with your virtualenv activated - start the application by running::

       npm run dev

This will start 2 processes in parallel: the static assets build loop on one side, and the Django server on the other.