
[dpapp] Add flow in bmv2 pipeline #608

Open · wants to merge 18 commits into base: main
Conversation

jimmyzhai (Collaborator):

Following the dpapp HLD (#606), this is the second part of the implementation. It updates the bmv2 pipeline to support stateful packet processing.
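
As a rough illustration of what the stateful processing added here looks like in P4 terms, below is a minimal, hedged sketch of a flow-lookup stage; the control, table, action, and metadata field names (flow_lookup_stage, flow_table, flow_key, flow_hit) are illustrative assumptions, not the PR's actual code.

// Minimal sketch only: a stateful stage keyed on a precomputed flow key.
// flow_key (a bit<> key) and flow_hit are assumed fields in metadata_t.
control flow_lookup_stage(inout headers_t hdr, inout metadata_t meta) {
    action set_flow_hit()  { meta.flow_hit = true; }
    action set_flow_miss() { meta.flow_hit = false; }

    table flow_table {
        key = { meta.flow_key : exact; }
        actions = { set_flow_hit; set_flow_miss; }
        const default_action = set_flow_miss;
    }

    apply {
        // Packets of existing flows hit the table; misses become candidates
        // for flow creation (handled by dpapp in this design).
        flow_table.apply();
    }
}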

@KrisNey-MSFT (Collaborator):

Hi @jimmyzhai - is this also ready to merge? Thank you!

@KrisNey-MSFT (Collaborator):

Hi @jimmyzhai - looks like we have conflicts here too...

@jimmyzhai force-pushed the dp_app_add_flow_in_pipeline branch 3 times, most recently from 65e401d to 7a721be on August 29, 2024 02:57
dash-pipeline/bmv2/dash_headers.p4
const bit<16> PACKET_META_HDR_SIZE=dash_packet_meta_t.minSizeInBytes();

#define DASH_ETHTYPE 0x876d
#define DPAPP_MAC 0x02fe23f0e413
Collaborator:

Will the MAC be changed on other people's machines?

jimmyzhai (Author):

It's a temporary hardcoded value; I've marked it with a FIXME comment so we can fix it later.
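
For concreteness, a sketch of how the temporary hardcode with its FIXME marker might read (the value is the one quoted above; the comment wording is illustrative):

// FIXME: temporary hardcoded dpapp MAC; make this configurable per deployment
// instead of a compile-time constant.
#define DPAPP_MAC 0x02fe23f0e413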

@@ -261,7 +140,7 @@ struct ha_data_t {
bit<16> dp_channel_src_port_max;

// HA packet/flow state
- dash_ha_flow_sync_state_t flow_sync_state;
+ dash_flow_state_t flow_sync_state;
Collaborator:

Since you renamed the struct, it would be better to also rename the field.

jimmyzhai (Author):

Renamed it to ha_flow_state.
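
Illustrative sketch of the rename (other ha_data_t members elided; only the field from the diff above is shown):

struct ha_data_t {
    // ... other HA fields elided ...

    // HA packet/flow state (was flow_sync_state)
    dash_flow_state_t ha_flow_state;
}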

@@ -303,13 +182,18 @@ struct metadata_t {

// Flow data
conntrack_data_t conntrack_data;
+ flow_table_data_t flow_table;
+ dash_flow_state_t flow_state;
Collaborator:

Is this duplicated with the one in ha_data_t?

jimmyzhai (Author):

No, not as of now. The flow state machine and the HA state machine are separate. We may refine this later.

Collaborator:

This design is odd. What's the difference between them? If they are not the same, can we split them into two types?

Collaborator:

If they are the same, then we should merge them into one.

jimmyzhai (Author):

Merged

packet.extract(hd.u0_ethernet);
transition select(hd.u0_ethernet.ether_type) {
IPV4_ETHTYPE: parse_u0_ipv4;
IPV6_ETHTYPE: parse_u0_ipv6;
DASH_ETHTYPE: parse_dash;
Collaborator:

Rename parse_dash -> parse_dash_hdr, etc.

jimmyzhai (Author):

Fixed

default: accept;
}
}

state parse_dash {
packet.extract(hd.packet_meta);
if (hd.packet_meta.packet_subtype != dash_packet_subtype_t.NONE) {
Collaborator:

It would be better to check packet_subtype against the flow_xxx values instead of NONE; that would be more robust and more readable.

jimmyzhai (Author):

Fixed
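
A minimal sketch of the more explicit check suggested above; the parser state names and the specific flow subtype value used here (FLOW_CREATE) are assumptions for illustration, not the PR's actual definitions:

state parse_dash_hdr {
    packet.extract(hd.packet_meta);
    // Match the explicit flow subtypes rather than "anything but NONE".
    transition select(hd.packet_meta.packet_subtype) {
        dash_packet_subtype_t.FLOW_CREATE: parse_dash_flow_data;
        default: accept;
    }
}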

table acl_group {
/* This table API should be implemented manually using underlay SAI */
@SaiTable[ignored = "true"]
table appliance {
Collaborator:

This needs to be synced with the latest code; this part has changed since.

jimmyzhai (Author):

Fixed.

@@ -17,65 +17,94 @@
#include "stages/metering_update.p4"
#include "underlay.p4"

control dash_ingress(
control dash_inbound_routing_stage(
Collaborator:

This seems to be doing similar things to PR #569. We will need to resolve the conflicts...

jimmyzhai (Author):

Merged PR #569.

Commit note: overlay_rewrite_data_t -> meta_overlay_rewrite_data_t, as a header nested in a struct is not well supported for the dpdk-pna target; make p4c-dpdk-pna fails due to:

terminate called after throwing an instance of 'Util::CompilerBug'
  what():  In file: /p4c/backends/dpdk/dpdkArch.cpp:467
           Compiler Bug: header fields should be of type bit<> or
           varbit<> found this is_ipv6
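
For context, a hedged sketch of the workaround pattern the note above describes: carrying the overlay rewrite data as a plain struct inside metadata_t rather than as a header nested in a struct, which the dpdk-pna backend rejects; apart from is_ipv6 (named in the compiler error), the members shown are illustrative.

// Plain struct (not a header) so it can be nested in metadata_t on dpdk-pna.
struct meta_overlay_rewrite_data_t {
    bool is_ipv6;      // the member the dpdk backend complained about
    // ... remaining overlay rewrite members elided ...
}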
@@ -134,14 +46,6 @@ enum bit<8> dash_eni_mac_type_t {
struct conntrack_data_t {
bool allow_in;
bool allow_out;
Collaborator:

The following fields are flow related and were intentionally aggregated into this structure, because conntrack_data_t is meant to hold all flow-related data.

Many of them have been moved out of this structure, and some fields seem to be missing, such as reverse_flow_key (it somehow doesn't show up in my search).

This doesn't feel right to me; this structure should not be changed. I can understand turning some of the data types into headers, but the metadata should remain - it is conceptually different from a header.

length = length + FLOW_DATA_HDR_SIZE;

if (meta.routing_actions & dash_routing_actions_t.STATIC_ENCAP != 0) {
#ifdef TARGET_DPDK_PNA
Collaborator:

Is it possible to remove these ifdefs?

- meta.conntrack_data.flow_table.max_flow_count = max_flow_count;
- meta.conntrack_data.flow_table.flow_enabled_key = dash_flow_enabled_key;
- meta.conntrack_data.flow_table.flow_ttl_in_milliseconds = flow_ttl_in_milliseconds;
+ meta.flow_table.max_flow_count = max_flow_count;
Collaborator:

Would you mind keeping these fields in conntrack_data? That keeps the code easier to understand and doesn't further complicate the top-level metadata structure - it is already a mess. :(
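
For clarity, a sketch of the grouping being requested here, reusing the type and field names from the diffs above (flow data kept under conntrack_data_t instead of as top-level metadata fields):

struct conntrack_data_t {
    bool allow_in;
    bool allow_out;

    // Flow data grouped here rather than directly in metadata_t:
    flow_table_data_t flow_table;
    dash_flow_state_t flow_state;
}

Accesses then keep the meta.conntrack_data.flow_table.* form shown in the pre-change lines above.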

@@ -125,6 +119,12 @@ sai_apis:
type: sai_uint16_t
objects: null
valid_only: null
- !!python/object:utils.sai_spec.sai_struct_entry.SaiStructEntry
Collaborator:

Wow... the API changes in this file are not expected; many of them are ordering changes. We need to get this fixed, since it will break the existing flow API.

jimmyzhai (Author):

Fixed.

@jimmyzhai force-pushed the dp_app_add_flow_in_pipeline branch 2 times, most recently from a3b9cf2 to 0045672 on September 9, 2024 17:28