Commit b8f352e

Merge pull request #2363 from edwardhartnett/ejh_doc_2
fixed some doxygen warnings
2 parents 054c392 + c5ad809

4 files changed: 19 additions & 20 deletions

docs/dispatch.md

Lines changed: 1 addition & 1 deletion
@@ -499,7 +499,7 @@ The code in *hdf4var.c* does an *nc_get_vara()* on the HDF4 SD
 dataset. This is all that is needed for all the nc_get_* functions to
 work.
 
-# Point of Contact {#filters_poc}
+# Point of Contact {#dispatch_poc}
 
 *Author*: Dennis Heimbigner<br>
 *Email*: dmh at ucar dot edu<br>

docs/internal.md

Lines changed: 13 additions & 12 deletions
@@ -59,8 +59,8 @@ term "deep" is also used to mean recursive.
 
 Two deep walking operations are provided by the netcdf-c library
 to aid in managing instances of complex structures.
-* free'ing an instance of the complex type
-* copying an instance of the complex type.
+- free'ing an instance of the complex type
+- copying an instance of the complex type.
 
 Previously The netcdf-c library only did shallow free and shallow copy of
 complex types. This meant that only the top level was properly
@@ -72,7 +72,7 @@ internally and the user's data.
 Note that the term "vector" is used to mean a contiguous (in
 memory) sequence of instances of some type. Given an array with,
 say, dimensions 2 X 3 X 4, this will be stored in memory as a
-vector of length 2*3*4=24 instances.
+vector of length 2\*3\*4=24 instances.
 
 The use cases are primarily these.
 
@@ -328,7 +328,7 @@ The model output is actually a struct containing two fields:
 ## The Inference Algorithm
 
 The construction of the model is primarily carried out by the function
-*NC\_infermodel()* (in *libdispatch/dinfermodel.c).
+*NC\_infermodel()* (in *libdispatch/dinfermodel.c*).
 It is given the following parameters:
 1. path -- (IN) absolute file path or URL
 2. modep -- (IN/OUT) the set of mode flags given to *NC\_open* or *NC\_create*.
@@ -493,7 +493,7 @@ int nc_inq_var_zstandard(int ncid, int varid, int* has_filterp, int* levelp);
 ````
 So generally the API has the ncid and the varid as fixed, and then
 a list of parameters specific to the filter -- level in this case.
-For the inquire function, there is an additional argument -- has_filterp --
+For the inquire function, there is an additional argument -- has\_filterp --
 that is set to 1 if the filter is defined for the given variable
 and is 0 if not.
 The remainder of the inquiry parameters are pointers to memory
@@ -506,10 +506,10 @@ requires three supporting objects:
 libzstd must be installed in order to use the zstandard
 API.
 2. A HDF5 wrapper for the filter must be installed in the
-directory pointed to by the HDF5_PLUGIN_PATH environment
+directory pointed to by the HDF5\_PLUGIN\_PATH environment
 variable.
 3. (Optional) An NCZarr Codec implementation must be installed
-in the the HDF5_PLUGIN_PATH directory.
+in the the HDF5\_PLUGIN\_PATH directory.
 
 ## Adding a New Standard Filter
 
@@ -521,7 +521,7 @@ of several locations.
 3. or it can be loaded as part of an external library such as libccr.
 
 However, the three objects listed above need to be
-stored in the HDF5_PLUGIN_DIR directory, so adding a standard
+stored in the HDF5\_PLUGIN\_PATH directory, so adding a standard
 filter still requires modification to the netcdf build system.
 This limitation may be lifted in the future.
 
@@ -543,7 +543,7 @@ fi
 AC_MSG_CHECKING([whether libzstd library is available])
 AC_MSG_RESULT([${have_zstd}])
 ````
-Note the the entry point (*ZSTD_compress*) is library dependent
+Note the the entry point (*ZSTD\_compress*) is library dependent
 and is used to see if the library is available.
 
 #### Makefile.am
@@ -558,22 +558,23 @@ libh5szip_la_SOURCES = H5Zzstd.c H5Zzstd.h
 endif
 ````
 
+````
 # Need our version of szip if libsz available and we are not using HDF5
 if HAVE_SZ
 noinst_LTLIBRARIES += libh5szip.la
 libh5szip_la_SOURCES = H5Zszip.c H5Zszip.h
 endif
-
+````
 #### CMakeLists.txt
 In an analog to *configure.ac*, a block like
 this needs to be in *netcdf-c/CMakeLists.txt*.
 ````
 FIND_PACKAGE(Zstd)
 set_std_filter(Zstd)
 ````
-The FIND_PACKAGE requires a CMake module for the filter
+The FIND\_PACKAGE requires a CMake module for the filter
 in the cmake/modules directory.
-The *set_std_filter* function is a macro.
+The *set\_std\_filter* function is a macro.
 
 An entry in the file config.h.cmake.in will also be needed.
 ````

docs/nczarr.md

Lines changed: 5 additions & 5 deletions
@@ -331,12 +331,12 @@ Amazon S3 accepts two forms for specifying the endpoint for accessing the data.
 1. Virtual -- the virtual addressing style places the bucket in the host part of a URL.
 For example:
 ```
-https://<bucketname>.s2.<region>.amazonaws.com/
+https://<bucketname>.s2.&lt;region&gt.amazonaws.com/
 ```
 2. Path -- the path addressing style places the bucket in at the front of the path part of a URL.
 For example:
 ```
-https://s2.<region>.amazonaws.com/<bucketname>/
+https://s2.&lt;region&gt.amazonaws.com/<bucketname>/
 ```
 
 The NCZarr code will accept either form, although internally, it is standardized on path style.
@@ -470,7 +470,7 @@ Here are a couple of examples using the _ncgen_ and _ncdump_ utilities.
 ```
 Note that the URLis internally translated to this
 ````
-https://s2.<region>.amazonaws.com/datasetbucket/rootkey#mode=nczarr,awsprofile=unidata" dataset.cdl
+https://s2.&lt;region&gt.amazonaws.com/datasetbucket/rootkey#mode=nczarr,awsprofile=unidata" dataset.cdl
 ````
 The region is from the algorithm described in Appendix E1.
 
@@ -752,7 +752,7 @@ s3://<bucket>/key
 ````
 Then this is rebuilt to this form:
 ````
-s3://s2.<region>.amazonaws.com>/key
+s3://s2.&lt;region&gt.amazonaws.com>/key
 ````
 However this requires figuring out the region to use.
 The algorithm for picking an region is as follows.
@@ -783,7 +783,7 @@ The content of these objects is the same as the contents of the corresponding ke
 * ''.nczarray <=> ''_NCZARR_ARRAY_''
 * ''.nczattr <=> ''_NCZARR_ATTR_''
 
-# Appendix G. JSON Attribute Convention. {#nczarr_version1}
+# Appendix G. JSON Attribute Convention. {#nczarr_json}
 
 An attribute may be encountered on read whose value when parsed
 by JSON is a dictionary. As a special conventions, the value

docs/notes.md

Lines changed: 0 additions & 2 deletions
@@ -4,8 +4,6 @@
 
 <H2>See Also:</H2>
 
-* \subpage nc-error-codes
-
 # Ignored if NULL {#ignored_if_null}
 
 Many of the argurments of netCDF functions are pointers. For example,
