Merged
4 changes: 2 additions & 2 deletions .github/workflows/deploy_beta_testing.yml
@@ -36,7 +36,7 @@ jobs:
run: echo "war_file=$(ls *.war | head -1)">> $GITHUB_ENV

- name: Upload war artifact
- uses: actions/upload-artifact@v4
+ uses: actions/upload-artifact@v5
with:
name: built-app
path: ./target/${{ env.war_file }}
@@ -50,7 +50,7 @@ jobs:
- uses: actions/checkout@v5

- name: Download war artifact
- uses: actions/download-artifact@v5
+ uses: actions/download-artifact@v6
with:
name: built-app
path: ./
10 changes: 5 additions & 5 deletions .github/workflows/maven_unit_test.yml
@@ -62,7 +62,7 @@ jobs:

# Upload the built war file. For download, it will be wrapped in a ZIP by GitHub.
# See also https://github.com/actions/upload-artifact#zipped-artifact-downloads
- - uses: actions/upload-artifact@v4
+ - uses: actions/upload-artifact@v5
with:
name: dataverse-java${{ matrix.jdk }}.war
path: target/dataverse*.war
@@ -72,7 +72,7 @@ jobs:
- run: |
tar -cvf java-builddir.tar target
tar -cvf java-m2-selection.tar ~/.m2/repository/io/gdcc/dataverse-*
- - uses: actions/upload-artifact@v4
+ - uses: actions/upload-artifact@v5
with:
name: java-artifacts
path: |
@@ -112,7 +112,7 @@ jobs:
cache: maven

# Get the build output from the unit test job
- - uses: actions/download-artifact@v5
+ - uses: actions/download-artifact@v6
with:
name: java-artifacts
- run: |
@@ -124,7 +124,7 @@ jobs:

# Wrap up and send to coverage job
- run: tar -cvf java-reportdir.tar target/site
- - uses: actions/upload-artifact@v4
+ - uses: actions/upload-artifact@v5
with:
name: java-reportdir
path: java-reportdir.tar
@@ -145,7 +145,7 @@ jobs:
cache: maven

# Get the build output from the integration test job
- - uses: actions/download-artifact@v5
+ - uses: actions/download-artifact@v6
with:
name: java-reportdir
- run: tar -xvf java-reportdir.tar
8 changes: 4 additions & 4 deletions doc/sphinx-guides/source/api/native-api.rst
@@ -4208,24 +4208,24 @@ Delete files from a dataset. This API call allows you to delete multiple files f

curl -H "X-Dataverse-key:$API_TOKEN" -X PUT "$SERVER_URL/api/datasets/:persistentId/deleteFiles?persistentId=$PERSISTENT_IDENTIFIER" \
-H "Content-Type: application/json" \
- -d '{"fileIds": [1, 2, 3]}'
+ -d '[1, 2, 3]'

The fully expanded example above (without environment variables) looks like this:

.. code-block:: bash

curl -H "X-Dataverse-key:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -X PUT "https://demo.dataverse.org/api/datasets/:persistentId/deleteFiles?persistentId=doi:10.5072/FK2ABCDEF" \
-H "Content-Type: application/json" \
- -d '{"fileIds": [1, 2, 3]}'
+ -d '[1, 2, 3]'

- The ``fileIds`` in the JSON payload should be an array of file IDs that you want to delete from the dataset.
+ The JSON payload should be an array of file IDs that you want to delete from the dataset.

You must have the appropriate permissions to delete files from the dataset.

Upon success, the API will return a JSON response with a success message and the number of files deleted.

The API call will report a 400 (BAD REQUEST) error if any of the files specified do not exist or are not in the latest version of the specified dataset.
- The ``fileIds`` in the JSON payload should be an array of file IDs that you want to delete from the dataset.
+ The JSON payload should be an array of file IDs that you want to delete from the dataset.
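For illustration only, the same ``deleteFiles`` call can be assembled from Java with the standard ``java.net.http.HttpClient``. This is a hypothetical sketch, not part of the documented guide — the server URL, DOI, and file IDs are placeholders, and the request is built but not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.List;
import java.util.stream.Collectors;

public class DeleteFilesExample {

    // Build the JSON payload: a bare array of file IDs, e.g. [1, 2, 3]
    static String toJsonArray(List<Long> fileIds) {
        return fileIds.stream()
                .map(String::valueOf)
                .collect(Collectors.joining(", ", "[", "]"));
    }

    public static void main(String[] args) {
        String serverUrl = "https://demo.dataverse.org";   // placeholder server
        String persistentId = "doi:10.5072/FK2ABCDEF";     // placeholder DOI
        String apiToken = System.getenv()
                .getOrDefault("API_TOKEN", "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx");

        // PUT with a bare JSON array body, matching the curl examples above
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(serverUrl
                        + "/api/datasets/:persistentId/deleteFiles?persistentId=" + persistentId))
                .header("X-Dataverse-key", apiToken)
                .header("Content-Type", "application/json")
                .method("PUT", HttpRequest.BodyPublishers.ofString(
                        toJsonArray(List.of(1L, 2L, 3L))))
                .build();

        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending it would be a matter of passing ``request`` to ``HttpClient.newHttpClient().send(...)``.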

.. _api-dataset-role-assignment-history:

4 changes: 3 additions & 1 deletion doc/sphinx-guides/source/user/dataset-management.rst
@@ -704,7 +704,7 @@ If you have a Contributor role (can edit metadata, upload files, and edit files,
Preview URL to Review Unpublished Dataset
=========================================

- Creating a Preview URL for a draft version of your dataset allows you to share your dataset (for viewing and downloading of files) before it is published to a wide group of individuals who may not have a user account on the Dataverse installation. Anyone you send the Preview URL to will not have to log into the Dataverse installation to view the unpublished dataset. Once a dataset has been published you may create new General Preview URLs for subsequent draft versions, but the Anonymous Preview URL will no longer be available.
+ Creating a Preview URL for a draft version of your dataset allows you to share your dataset (for viewing and downloading files, including :ref:`restricted <restricted-files>` and :ref:`embargoed <embargoes>` files) before it is published to a wide group of people who might not have a user account on the Dataverse installation. Anyone you send the Preview URL to will not have to log in to the Dataverse installation to view the unpublished dataset. Once a dataset has been published, you may create new General Preview URLs for subsequent draft versions, but the Anonymous Preview URL will no longer be available.

**Note:** To create a Preview URL, you must have the *ManageDatasetPermissions* permission for your draft dataset, usually given by the :ref:`roles <permissions>` *Curator* or *Administrator*.

@@ -726,6 +726,8 @@ To disable a Preview URL and to revoke access, follow the same steps as above un

Note that only one Preview URL (normal or with anonymized access) can be configured per dataset at a time.

+ .. _embargoes:

Embargoes
=========


This file was deleted.

2 changes: 1 addition & 1 deletion pom.xml
@@ -32,7 +32,7 @@
<jhove.version>1.20.1</jhove.version>
<poi.version>5.4.0</poi.version>
<tika.version>3.2.2</tika.version>
- <netcdf.version>5.5.3</netcdf.version>
+ <netcdf.version>5.9.1</netcdf.version>

<openapi.infoTitle>Dataverse API</openapi.infoTitle>
<openapi.infoVersion>${project.version}</openapi.infoVersion>
@@ -17,7 +17,6 @@
import java.util.logging.Logger;

import org.apache.commons.lang3.RandomStringUtils;
- import com.beust.jcommander.Strings;

public abstract class AbstractPidProvider implements PidProvider {

@@ -577,8 +576,8 @@ public JsonObject getProviderSpecification() {
providerSpecification.add("shoulder", shoulder);
providerSpecification.add("identifierGenerationStyle", identifierGenerationStyle);
providerSpecification.add("datafilePidFormat", datafilePidFormat);
- providerSpecification.add("managedSet", Strings.join(",", managedSet.toArray()));
- providerSpecification.add("excludedSet", Strings.join(",", excludedSet.toArray()));
+ providerSpecification.add("managedSet", String.join(",", managedSet));
+ providerSpecification.add("excludedSet", String.join(",", excludedSet));
return providerSpecification.build();
}
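The swap works because ``java.lang.String.join`` accepts any ``Iterable<? extends CharSequence>`` directly, so the ``toArray()`` conversion required by the jcommander helper is no longer needed, and the dependency on ``com.beust.jcommander`` can be dropped. A minimal sketch (the set contents here are illustrative, not real managed PIDs):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class JoinSketch {
    public static void main(String[] args) {
        // LinkedHashSet keeps insertion order, so the joined string is deterministic
        Set<String> managedSet = new LinkedHashSet<>();
        managedSet.add("doi:10.5072/FK2");
        managedSet.add("doi:10.5072/FK3");

        // String.join takes the collection directly -- no toArray() needed,
        // unlike com.beust.jcommander.Strings.join
        String joined = String.join(",", managedSet);
        System.out.println(joined); // doi:10.5072/FK2,doi:10.5072/FK3
    }
}
```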
