|
15 | 15 | "source": [ |
16 | 16 | "This tutorial covers the format of the trajectory output exported by Parcels. **Parcels does not include advanced analysis or plotting functionality**; users are encouraged to write these themselves to suit their research goals. Here we provide some starting points for exploring the Parcels output files yourself.\n", |
17 | 17 | "\n", |
18 | | - "- [**Reading the output file**](#Reading-the-output-file)\n", |
19 | | - "- [**Trajectory data structure**](#Trajectory-data-structure)\n", |
20 | | - "- [**Analysis**](#Analysis)\n", |
21 | | - "- [**Plotting**](#Plotting)\n", |
22 | | - "- [**Animations**](#Animations)\n", |
| 18 | + "- [**Reading the output file**](#reading-the-output-file)\n", |
| 19 | + "- [**Trajectory data structure**](#trajectory-data-structure)\n", |
| 20 | + "- [**Analysis**](#analysis)\n", |
| 21 | + "- [**Plotting**](#plotting)\n", |
| 22 | + "- [**Animations**](#animations)\n", |
23 | 23 | "\n", |
24 | 24 | "For more advanced reading and tutorials on the analysis of Lagrangian trajectories, we recommend checking out the [Lagrangian Diagnostics Analysis Cookbook](https://lagrangian-diags.readthedocs.io/en/latest/tutorials.html) and the broader project around it. The [TrajAn package](https://opendrift.github.io/trajan/index.html) can also be used to read and plot datasets of Lagrangian trajectories." |
25 | 25 | ] |
|
56 | 56 | " \"CopernicusMarine_data_for_Argo_tutorial\"\n", |
57 | 57 | ")\n", |
58 | 58 | "\n", |
59 | | - "ds = xr.open_mfdataset(f\"{example_dataset_folder}/*.nc\", combine=\"by_coords\")\n", |
60 | | - "ds.load() # load the dataset into memory\n", |
| 59 | + "ds_fields = xr.open_mfdataset(f\"{example_dataset_folder}/*.nc\", combine=\"by_coords\")\n", |
| 60 | + "ds_fields.load() # load the dataset into memory\n", |
61 | 61 | "\n", |
62 | | - "fieldset = parcels.FieldSet.from_copernicusmarine(ds)" |
| 62 | + "fieldset = parcels.FieldSet.from_copernicusmarine(ds_fields)" |
63 | 63 | ] |
64 | 64 | }, |
65 | 65 | { |
|
72 | 72 | "npart = 10 # number of particles to be released\n", |
73 | 73 | "lon = 32 * np.ones(npart)\n", |
74 | 74 | "lat = np.linspace(-32.5, -30.5, npart, dtype=np.float32)\n", |
75 | | - "time = ds.time.values[0] + np.arange(0, npart) * np.timedelta64(2, \"h\")\n", |
76 | | - "z = np.repeat(ds.depth.values[0], npart)\n", |
| 75 | + "time = ds_fields.time.values[0] + np.arange(0, npart) * np.timedelta64(2, \"h\")\n", |
| 76 | + "z = np.repeat(ds_fields.depth.values[0], npart)\n", |
77 | 77 | "\n", |
78 | 78 | "pset = parcels.ParticleSet(\n", |
79 | 79 | " fieldset=fieldset, pclass=parcels.Particle, lon=lon, lat=lat, time=time, z=z\n", |
80 | 80 | ")\n", |
81 | 81 | "\n", |
82 | | - "output_file = parcels.ParticleFile(\"Output.zarr\", outputdt=np.timedelta64(2, \"h\"))" |
| 82 | + "output_file = parcels.ParticleFile(\"output.zarr\", outputdt=np.timedelta64(2, \"h\"))" |
83 | 83 | ] |
84 | 84 | }, |
85 | 85 | { |
|
109 | 109 | { |
110 | 110 | "cell_type": "code", |
111 | 111 | "execution_count": null, |
112 | | - "metadata": {}, |
| 112 | + "metadata": { |
| 113 | + "tags": [ |
| 114 | + "hide-output" |
| 115 | + ] |
| 116 | + }, |
113 | 117 | "outputs": [], |
114 | 118 | "source": [ |
115 | 119 | "pset.execute(\n", |
|
136 | 140 | "metadata": {}, |
137 | 141 | "outputs": [], |
138 | 142 | "source": [ |
139 | | - "data_xarray = xr.open_zarr(\"Output.zarr\")\n", |
| 143 | + "ds_particles = xr.open_zarr(\"output.zarr\")\n", |
140 | 144 | "\n", |
141 | | - "print(data_xarray)" |
| 145 | + "print(ds_particles)" |
142 | 146 | ] |
143 | 147 | }, |
144 | 148 | { |
|
171 | 175 | "source": [ |
172 | 176 | "np.set_printoptions(linewidth=160)\n", |
173 | 177 | "one_hour = np.timedelta64(1, \"h\") # Define timedelta object to help with conversion\n", |
174 | | - "time_from_start = data_xarray[\"time\"].values - fieldset.time_interval.left\n", |
| 178 | + "time_from_start = ds_particles[\"time\"].values - fieldset.time_interval.left\n", |
175 | 179 | "\n", |
176 | 180 | "print(time_from_start / one_hour) # timedelta / timedelta -> float number of hours" |
177 | 181 | ] |
|
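The `timedelta / timedelta -> float` conversion above is plain NumPy and can be tried without the notebook's dataset. A minimal sketch, using made-up release times two hours apart (the start date below is invented for illustration):

```python
import numpy as np

# Toy release times, two hours apart, mirroring the tutorial's schedule
# (the start date is made up, not taken from the dataset)
start = np.datetime64("2024-01-01T00:00")
times = start + np.arange(4) * np.timedelta64(2, "h")

one_hour = np.timedelta64(1, "h")
hours_from_start = (times - start) / one_hour  # timedelta / timedelta -> float

print(hours_from_start)  # [0. 2. 4. 6.]
```

NumPy handles the unit conversion in the division, so the result is a plain float array regardless of whether the stored precision is minutes or nanoseconds.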
202 | 206 | "source": [ |
203 | 207 | "import matplotlib.pyplot as plt\n", |
204 | 208 | "\n", |
205 | | - "x = data_xarray[\"lon\"].values\n", |
206 | | - "y = data_xarray[\"lat\"].values\n", |
| 209 | + "x = ds_particles[\"lon\"].values\n", |
| 210 | + "y = ds_particles[\"lat\"].values\n", |
207 | 211 | "distance = np.cumsum(\n", |
208 | 212 | " np.sqrt(np.square(np.diff(x)) + np.square(np.diff(y))), axis=1\n", |
209 | 213 | ") # d = (dx^2 + dy^2)^(1/2)\n", |
|
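Note that the `lon`/`lat` differences above are in degrees, so the cumulative "distance" is in degrees too. If metric distances are wanted, one rough approach is to scale by kilometres per degree (a sketch assuming a spherical Earth and the approximate factor of 111.32 km per degree of latitude; the toy trajectory arrays below are invented, not the notebook's output):

```python
import numpy as np

# Toy (trajectory, obs) arrays in degrees: 2 particles, 4 observations each
lon = np.array([[32.0, 32.1, 32.3, 32.6], [32.0, 32.2, 32.5, 32.9]])
lat = np.array([[-31.0, -31.0, -31.1, -31.1], [-32.0, -32.1, -32.1, -32.2]])

# Approximate degrees -> kilometres: one degree of latitude is ~111.32 km,
# and one degree of longitude shrinks with cos(latitude)
km_per_deg = 111.32
mid_lat = 0.5 * (lat[:, :-1] + lat[:, 1:])  # latitude at each segment midpoint
dx = np.diff(lon, axis=1) * km_per_deg * np.cos(np.deg2rad(mid_lat))
dy = np.diff(lat, axis=1) * km_per_deg

distance_km = np.cumsum(np.sqrt(dx**2 + dy**2), axis=1)
print(distance_km.shape)  # (2, 3): one cumulative distance per segment
```

For serious distance calculations, a proper geodesic library is preferable to this flat-segment approximation.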
262 | 266 | "source": [ |
263 | 267 | "### Conditional selection\n", |
264 | 268 | "\n", |
265 | | - "In other cases, the processing of the data itself however depends on the absolute time at which the observations are made, e.g. studying seasonal phenomena. In that case the array structure is not as simple: the data must be selected by their `time` value. Here we show how the mean location of the particles evolves through time. This also requires the trajectory data to be aligned in time. The data are selected using `xr.DataArray.where()` which compares the time variable to a specific time. This type of selecting data with a condition (`data_xarray['time']==time`) is a powerful tool to analyze trajectory data.\n" |
| 269 | + "In other cases, however, the processing of the data depends on the absolute time at which the observations are made, e.g. when studying seasonal phenomena. The array structure is then not as simple: the data must be selected by their `time` value. Here we show how the mean location of the particles evolves through time, which also requires the trajectory data to be aligned in time. The data are selected using `xr.DataArray.where()`, which compares the time variable to a specific time. Selecting data with a condition like `ds_particles['time']==time` is a powerful tool for analyzing trajectory data.\n" |
266 | 270 | ] |
267 | 271 | }, |
268 | 272 | { |
|
276 | 280 | "mean_lat_x = []\n", |
277 | 281 | "\n", |
278 | 282 | "timerange = np.arange(\n", |
279 | | - " np.nanmin(data_xarray[\"time\"].values),\n", |
280 | | - " np.nanmax(data_xarray[\"time\"].values) + np.timedelta64(timedelta(hours=2)),\n", |
| 283 | + " np.nanmin(ds_particles[\"time\"].values),\n", |
| 284 | + " np.nanmax(ds_particles[\"time\"].values) + np.timedelta64(timedelta(hours=2)),\n", |
281 | 285 | " timedelta(hours=2),\n", |
282 | 286 | ") # array of datetime64 values in 2-hour steps\n", |
283 | 287 | "\n", |
284 | 288 | "for time in timerange:\n", |
285 | 289 | " # if all trajectories share an observation at time\n", |
286 | | - " if np.all(np.any(data_xarray[\"time\"] == time, axis=1)):\n", |
| 290 | + " if np.all(np.any(ds_particles[\"time\"] == time, axis=1)):\n", |
287 | 291 | " # find the data that share the time\n", |
288 | 292 | " mean_lon_x += [\n", |
289 | | - " np.nanmean(data_xarray[\"lon\"].where(data_xarray[\"time\"] == time).values)\n", |
| 293 | + " np.nanmean(ds_particles[\"lon\"].where(ds_particles[\"time\"] == time).values)\n", |
290 | 294 | " ]\n", |
291 | 295 | " mean_lat_x += [\n", |
292 | | - " np.nanmean(data_xarray[\"lat\"].where(data_xarray[\"time\"] == time).values)\n", |
| 296 | + " np.nanmean(ds_particles[\"lat\"].where(ds_particles[\"time\"] == time).values)\n", |
293 | 297 | " ]" |
294 | 298 | ] |
295 | 299 | }, |
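The conditional-selection idiom can be tried on a small hand-built dataset with the same `(trajectory, obs)` layout as the Parcels output. A sketch with toy values (the dataset, coordinates, and times below are invented for illustration):

```python
import numpy as np
import xarray as xr

# Toy dataset mimicking the (trajectory, obs) layout of the Parcels output
times = np.datetime64("2024-01-01") + np.arange(3) * np.timedelta64(2, "h")
time_2d = np.stack([times, times])  # both particles observed at the same times
ds_toy = xr.Dataset(
    {
        "time": (("trajectory", "obs"), time_2d),
        "lon": (("trajectory", "obs"), np.array([[32.0, 32.2, 32.4], [32.0, 32.4, 32.8]])),
    }
)

# Mean longitude over all particles at one absolute time, using the same
# where()-based conditional selection as in the loop above
t = times[1]
mean_lon = float(np.nanmean(ds_toy["lon"].where(ds_toy["time"] == t).values))
print(mean_lon)  # mean of 32.2 and 32.4, i.e. 32.3
```

`where()` masks every observation whose time does not match with NaN, so `np.nanmean` averages exactly the positions observed at that absolute time.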
|
344 | 348 | "\n", |
345 | 349 | "###-Points-###\n", |
346 | 350 | "ax1.set_title(\"Points\")\n", |
347 | | - "ax1.scatter(data_xarray[\"lon\"].T, data_xarray[\"lat\"].T)\n", |
| 351 | + "ax1.scatter(ds_particles[\"lon\"].T, ds_particles[\"lat\"].T)\n", |
348 | 352 | "###-Lines-###\n", |
349 | 353 | "ax2.set_title(\"Lines\")\n", |
350 | | - "ax2.plot(data_xarray[\"lon\"].T, data_xarray[\"lat\"].T)\n", |
| 354 | + "ax2.plot(ds_particles[\"lon\"].T, ds_particles[\"lat\"].T)\n", |
351 | 355 | "###-Points + Lines-###\n", |
352 | 356 | "ax3.set_title(\"Points + Lines\")\n", |
353 | | - "ax3.plot(data_xarray[\"lon\"].T, data_xarray[\"lat\"].T, marker=\"o\")\n", |
| 357 | + "ax3.plot(ds_particles[\"lon\"].T, ds_particles[\"lat\"].T, marker=\"o\")\n", |
354 | 358 | "###-Not Transposed-###\n", |
355 | 359 | "ax4.set_title(\"Not transposed\")\n", |
356 | | - "ax4.plot(data_xarray[\"lon\"], data_xarray[\"lat\"], marker=\"o\")\n", |
| 360 | + "ax4.plot(ds_particles[\"lon\"], ds_particles[\"lat\"], marker=\"o\")\n", |
357 | 361 | "\n", |
358 | 362 | "plt.show()" |
359 | 363 | ] |
|
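The transpose matters because `matplotlib` draws one line per *column* of the input arrays. A small sketch with synthetic `(trajectory, obs)` arrays demonstrates this (the `Agg` backend is selected only so the sketch runs headless):

```python
import matplotlib

matplotlib.use("Agg")  # non-interactive backend so this sketch runs headless
import matplotlib.pyplot as plt
import numpy as np

# Synthetic (trajectory, obs) arrays: 3 particles, 5 observations each
lon = np.linspace(0, 1, 5) + np.arange(3)[:, None]
lat = np.linspace(0, 1, 5) * np.ones((3, 5))

fig, ax = plt.subplots()
# plot() draws one line per column, so transpose to get one line per particle
lines = ax.plot(lon.T, lat.T)
print(len(lines))  # 3 lines, one per trajectory
```

Without the transpose, each call would instead draw one line per observation index, connecting different particles together, which is the cluttered "Not transposed" panel above.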
373 | 377 | "source": [ |
374 | 378 | "Trajectory plots like the ones above can become very cluttered for large sets of particles. To better see patterns, it's a good idea to create an animation in time and space. To do this, matplotlib offers an [animation package](https://matplotlib.org/stable/api/animation_api.html). Here we show how to use the [**FuncAnimation**](https://matplotlib.org/3.3.2/api/_as_gen/matplotlib.animation.FuncAnimation.html#matplotlib.animation.FuncAnimation) class to animate Parcels trajectory data, based on [this visualisation tutorial](https://github.com/Parcels-code/10year-anniversary-session5/blob/eaf7ac35f43c222280fa5577858be81dc346c06b/animations_tutorial.ipynb) from the Parcels 10-year anniversary.\n", |
375 | 379 | "\n", |
376 | | - "To correctly reveal the patterns in time we must remember that the `obs` dimension does not necessarily correspond to the `time` variable ([see the section of Trajectory data structure above](#Trajectory-data-structure)). In the animation of the particles, we usually want to draw the points at each consecutive moment in time, not necessarily at each moment since the start of the trajectory. To do this we must [select the correct data](#Conditional-selection) in each rendering.\n" |
| 380 | + "To reveal the patterns in time correctly, we must remember that the `obs` dimension does not necessarily correspond to the `time` variable ([see the Trajectory data structure section above](#trajectory-data-structure)). When animating the particles, we usually want to draw the points at each consecutive moment in absolute time, not at each observation index since the start of a trajectory. To do this we must [select the correct data](#conditional-selection) for each frame.\n" |
377 | 381 | ] |
378 | 382 | }, |
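The select-by-absolute-time idea can be sketched with a minimal `FuncAnimation` on invented toy arrays (the positions and times below are made up; note the mask compares against the 2-D `time` array, not the `obs` index):

```python
import matplotlib

matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation

# Toy (trajectory, obs) data: the second particle is released one step later
time_2d = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
lon = np.array([[0.0, 0.1, 0.2, 0.3], [1.0, 1.1, 1.2, 1.3]])
lat = np.array([[0.0, 0.1, 0.2, 0.3], [1.0, 1.1, 1.2, 1.3]])

fig, ax = plt.subplots()
ax.set(xlim=(-0.5, 1.5), ylim=(-0.5, 1.5))
scat = ax.scatter([], [])


def update(frame_time):
    # Select positions observed at this absolute time, not at this obs index
    mask = time_2d == frame_time
    scat.set_offsets(np.column_stack([lon[mask], lat[mask]]))
    return (scat,)


anim = FuncAnimation(fig, update, frames=np.arange(5), interval=200)
```

At frame time 1, for example, the mask picks obs 1 of the first particle and obs 0 of the second, so both particles are drawn at the positions they actually occupied at that moment.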
379 | 383 | { |
|
411 | 415 | "\n", |
412 | 416 | "# Set up the colors and associated trajectories:\n", |
413 | 417 | "# get release times for each particle (first valid obs for each trajectory)\n", |
414 | | - "release_times = data_xarray[\"time\"].min(dim=\"obs\", skipna=True).values\n", |
| 418 | + "release_times = ds_particles[\"time\"].min(dim=\"obs\", skipna=True).values\n", |
415 | 419 | "\n", |
416 | 420 | "# get unique release times and assign colors\n", |
417 | 421 | "unique_release_times = np.unique(release_times[~np.isnat(release_times)])\n", |
|
431 | 435 | "print(\"Pre-computing all particle positions...\")\n", |
432 | 436 | "all_particles_data = []\n", |
433 | 437 | "for i, target_time in enumerate(timerange):\n", |
434 | | - " time_id = np.where(data_xarray[\"time\"] == target_time)\n", |
435 | | - " lons = data_xarray[\"lon\"].values[time_id]\n", |
436 | | - " lats = data_xarray[\"lat\"].values[time_id]\n", |
| 438 | + " time_id = np.where(ds_particles[\"time\"] == target_time)\n", |
| 439 | + " lons = ds_particles[\"lon\"].values[time_id]\n", |
| 440 | + " lats = ds_particles[\"lat\"].values[time_id]\n", |
437 | 441 | " particle_indices = time_id[0]\n", |
438 | 442 | " valid = ~np.isnan(lons) & ~np.isnan(lats)\n", |
439 | 443 | "\n", |
|