
[Task Runner API] [Flower Interoperability] Integrate changes for flower-app-pytorch workspace's custom Collaborator into core component #1598

Merged
kminhta merged 11 commits into securefederatedai:develop from kminhta:integrate-flwr-collaborator-to-core
May 19, 2025

Conversation

@kminhta
Collaborator

@kminhta kminhta commented May 8, 2025

Summary

Part 2/x of bringing flower-app-pytorch custom components back into core components

Type of Change (Mandatory)

Specify the type of change being made.

  • Feature enhancement

Description (Mandatory)

This PR:

  • removes src.collaborator.CollaboratorFlower and pulls its changes back into openfl.component.Collaborator
  • renames the task from start_client_adapter to interop for better generalizability
  • adds a prepare_interop_server() callback to Collaborator in order to minimize the changes needed in do_task(). Originally, a separate conditional started a separate set of tasks specific to interoperability, which made the code harder to read. Instead, we now pass the module to import (src.grpc.connector.flower.interop_server) via the plan's task settings; this module is imported, initialized, and sent to the Task Runner in kwargs at the beginning of the experiment
  • modifies the flower.interop_server argument to accept a callable (receive_message_from_interop(), defined in prepare_interop_server()). Originally, we sent the entire client to interop_server(), and interop_server() called self.client.send_message_to_server() directly. Having interop_server() pass just the response to a callable on the Collaborator better encapsulates both the client and the collaborator
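The callable-based design described in the last bullet can be sketched roughly as follows. This is an illustrative sketch, not OpenFL's actual code: the method names (prepare_interop_server, receive_message_from_interop, send_message_to_server) follow the PR description, but the InteropServer stub, the Client stub, and all signatures are assumptions.

```python
# Illustrative sketch of the callable-based design from the PR description.
# The InteropServer and Client stubs and all signatures are assumptions;
# only the method names mirror the description above.

class InteropServer:
    """Stand-in for src.grpc.connector.flower.interop_server."""

    def __init__(self, message_callback):
        # The server holds only a callable, never the client itself.
        self.message_callback = message_callback

    def handle(self, response):
        # Pass just the response back through the Collaborator's callable.
        return self.message_callback(response)


class Client:
    def send_message_to_server(self, message):
        return f"ack:{message}"


class Collaborator:
    def __init__(self, client):
        self.client = client

    def receive_message_from_interop(self, response):
        # The Collaborator, not the interop server, talks to its client.
        return self.client.send_message_to_server(response)

    def prepare_interop_server(self):
        # Hand the interop server a bound callable instead of self.client.
        return InteropServer(self.receive_message_from_interop)


collab = Collaborator(Client())
server = collab.prepare_interop_server()
print(server.handle("round-1-update"))  # ack:round-1-update
```

The point of the change is that the interop server's dependency surface shrinks from the whole client object to a single callable on the Collaborator.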

Testing

Manual testing in non-SGX and SGX environments

Signed-off-by: kta-intel <kevin.ta@intel.com>
@kminhta kminhta changed the title [WIP] [Task Runner API] [Flower Interoperability] Integrate changes for flower-app-pytorch workspace's custom Collaborator into core component [Task Runner API] [Flower Interoperability] Integrate changes for flower-app-pytorch workspace's custom Collaborator into core component May 13, 2025
@kminhta kminhta marked this pull request as ready for review May 13, 2025 20:31
Signed-off-by: kta-intel <kevin.ta@intel.com>
Collaborator

@teoparvanov teoparvanov left a comment


A couple of thoughts from my first read-through:

Comment thread openfl-workspace/flower-app-pytorch/plan/plan.yaml Outdated
Comment thread openfl-workspace/flower-app-pytorch/src/runner.py Outdated
Comment thread openfl-workspace/flower-app-pytorch/src/runner.py Outdated
Comment thread openfl-workspace/flower-app-pytorch/src/runner.py Outdated
Comment thread openfl/component/collaborator/collaborator.py Outdated
kminhta added 5 commits May 14, 2025 08:04
… local_grpc_server to interop_server

Signed-off-by: kta-intel <kevin.ta@intel.com>
Signed-off-by: kta-intel <kevin.ta@intel.com>
Signed-off-by: kta-intel <kevin.ta@intel.com>
Signed-off-by: kta-intel <kevin.ta@intel.com>
Signed-off-by: kta-intel <kevin.ta@intel.com>
Collaborator

@teoparvanov teoparvanov left a comment


Awesome, thanks for a crystal-clear implementation, @kta-intel !

Comment thread openfl/component/collaborator/collaborator.py Outdated
kminhta added 2 commits May 15, 2025 07:16
Signed-off-by: kta-intel <kevin.ta@intel.com>
Signed-off-by: kta-intel <kevin.ta@intel.com>
```yaml
function : start_client_adapter
kwargs :
  local_server_port : 0   # local gRPC server port; 0 to dynamically allocate
  interop_server_port : 0 # interop server port; 0 to dynamically allocate
```
Collaborator Author

@kminhta kminhta May 15, 2025


Working on making dynamic allocation of gRPC ports consistent across OpenFL in #1599.

The issue in this particular case is that for a local simulation, the collaborators need different ports, but hashing the plan would give them all the same port, which leads to binding conflicts. I'm considering adding a local_simulation flag so the Collaborator can dynamically allocate a port based on its common name during _prepare_interop_server - or the Task Runner could handle this instead, which may be a cleaner delineation from the Collaborator.

It's actually the same reason for the messy handling of client_port here: https://github.com/securefederatedai/openfl/blob/develop/openfl-workspace/flower-app-pytorch/src/runner.py#L51
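For context on the port-collision issue above, a minimal sketch (not OpenFL code, assumed helper name) of why requesting port 0 avoids the conflict: the OS assigns each bind a free ephemeral port, whereas a port derived by hashing the plan is identical for every local collaborator.

```python
# Minimal sketch (not OpenFL code): binding to port 0 lets the OS pick a
# free ephemeral port, so two local collaborators can each request "port 0"
# without colliding, while a plan-hash-derived port is the same for both.
import socket

def allocate_dynamic_port() -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("localhost", 0))   # 0 = let the OS choose a free port
        return s.getsockname()[1]  # the port actually assigned

port_a = allocate_dynamic_port()
port_b = allocate_dynamic_port()
print(port_a, port_b)  # two OS-assigned ephemeral ports
```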

@kminhta kminhta merged commit 6321951 into securefederatedai:develop May 19, 2025
72 of 73 checks passed
tayfunceylan pushed a commit to tayfunceylan/openfl that referenced this pull request May 23, 2025
…wer-app-pytorch workspace's custom Collaborator into core component (securefederatedai#1598)

* integrate flower collaborator into core collaborator

Signed-off-by: kta-intel <kevin.ta@intel.com>

* formatting

Signed-off-by: kta-intel <kevin.ta@intel.com>

* change task name interop -> prepare_for_interop, change references to local_grpc_server to interop_server

Signed-off-by: kta-intel <kevin.ta@intel.com>

* add prepare_interop_server as callback to run on_experiment_begin

Signed-off-by: kta-intel <kevin.ta@intel.com>

* fix typo

Signed-off-by: kta-intel <kevin.ta@intel.com>

* change to on_round_begin()

Signed-off-by: kta-intel <kevin.ta@intel.com>

* return empty dict

Signed-off-by: kta-intel <kevin.ta@intel.com>

* set self.prepare_interop_server() to run on experiment begin

Signed-off-by: kta-intel <kevin.ta@intel.com>

* remove self-explanatory comment, added another comment for clarity

Signed-off-by: kta-intel <kevin.ta@intel.com>

---------

Signed-off-by: kta-intel <kevin.ta@intel.com>
Signed-off-by: Tayfun Ceylan <tayfun.ceylan>