
OCF Native Cloud 2.0


Ondrej Tomcik
 

Dear IoTivity devs,

 

Please be informed that the new Cloud 2.0 design concept is live: https://wiki.iotivity.org/coapnativecloud

Your comments are warmly welcome.

Implementation is in progress.

 

BR

 

Ondrej Tomcik :: KISTLER :: measure, analyze, innovate

 


Ondrej Tomcik
 

Hello Max,

Thanks for your message.

 

Please see my inline comments.

 

 

 

Ondrej Tomcik :: KISTLER :: measure, analyze, innovate

 

From: Max Kholmyansky [mailto:max@...]
Sent: Thursday, August 9, 2018 2:58 PM
To: Tomcik Ondrej
Cc: iotivity-dev@...; Scott King <Scott.King@...> (Scott.King@...); Max Kholmyansky (max@...); Gregg Reynolds (dev@...); Kralik Jozef; Rafaj Peter
Subject: Re: [dev] OCF Native Cloud 2.0

 

Hi Ondrej,

 

Thanks for sharing the design.

 

It seems like the design document is technology agnostic: it does not mention any specific technology used for the implementation. Yet you mention that the implementation is in progress. Does it mean that the technology stack was already chosen? Can you share this information?

Yes, this document is still technology agnostic. We will soon introduce the selected technology stack – or let's say a roadmap of supported technologies.

The implementation is in Go, but technologies like the message broker / DB / event store are still being evaluated. The goal is not to force users to use a certain DB or broker; it should stay generic, so users can use what they prefer, or use a cloud-native service.
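To illustrate that goal, a minimal Go sketch of how the messaging layer could be hidden behind an interface so any broker (or a managed cloud service) can be plugged in. All names here (Bus, Event, Publish, Subscribe) are hypothetical, not from the design document:

    package eventbus

    import "context"

    // Event is a minimal envelope for anything published on the bus.
    type Event struct {
        Topic   string
        Payload []byte
    }

    // Bus abstracts the concrete broker (Kafka, RabbitMQ, a managed
    // cloud service, ...) so no component depends on a single vendor.
    type Bus interface {
        Publish(ctx context.Context, e Event) error
        Subscribe(ctx context.Context, topic string, handle func(Event)) error
    }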

 

 

I have 2 areas in the document I would like to understand better.

 

1. OCF CoAP Gateway

 

If my understanding is right, this component is in charge of handling the TCP connectivity with the connecting clients and servers, while all the logic is "forwarded" to other components, using commands and events. Is it right?

Yes. This allows you to introduce a new gateway, for example an HTTP one, and guarantees interoperability within the Cloud across many different devices.

 

It will be helpful to get an overall picture of the "other" components.

Other components – or let's talk about the implementation: ResourceService, AuthorizationService (a sample will be provided, but it should be user specific), ResourceShadowService and ResourceDirectoryService (these two might be just one service).

 

 

You mention that the "Gateway" is stateful by nature, due to the TCP connection. What about the other components? Can they be stateless, so the state will be managed in a Data Store? This may be helpful from the scaling perspective.

ResourceService is stateless and might even be deployed as a lambda function (we are evaluating this). AuthorizationService is user specific. ResourceShadow and ResourceDirectory are the read side; they might use just an in-memory DB, filled from the event store during startup.

 

2. Resource Shadow

 

If I got it right, the architecture assumes that the cloud keeps the up-to-date state of the server resources, by permanently observing those resources, even if no client is connected. Is it right?

I assume that by client you meant OCF Client. Yes, you’re right.

 

Does it mean that a "query" (GET request) by a client can be answered by the cloud, without need to query the actual server?

Yes

 

Will there be a mechanism to store the history of the server state? What will be needed to develop such a functionality?

You mean online / offline? It will be stored – the complete history is stored. Each gateway, in this implementation the OCF CoAP Gateway, has to issue a command to the ResourceAggregate (ResourceService) to set the device online / offline. As it is an aggregate, you have the whole history of what has happened. Each change to a resource is persisted, including the device status – online/offline.
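To make "each change is persisted" concrete, a minimal event-sourced aggregate sketch in Go; the types, and the DeviceOnline/DeviceOffline event names, are illustrative, not the actual implementation:

    package resource

    import "time"

    // Event records one change to a device: a resource update or an
    // online/offline transition. Kind names are illustrative, e.g.
    // "ResourceRepresentationUpdated", "DeviceOnline", "DeviceOffline".
    type Event struct {
        DeviceID string
        Kind     string
        Payload  []byte
        At       time.Time
    }

    // Aggregate is the append-only history of one device; nothing is
    // ever overwritten, so the full history stays queryable.
    type Aggregate struct {
        history []Event
    }

    func (a *Aggregate) Apply(e Event) {
        a.history = append(a.history, e)
    }

    // History returns every recorded change, oldest first.
    func (a *Aggregate) History() []Event {
        return a.history
    }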

 

 

 

The last point... If I got it right, the only way to communicate is via a TCP connection using TLS. This may be good enough for servers like smart home appliances, and clients like mobile apps on smartphones. But there is also the case of cloud-to-cloud integration: say, voice commands to be issued by a 3rd-party cloud. In the cloud-to-cloud case, I doubt it's a good idea to require the overhead of a TCP connection per requesting user. Is there any solution for the cloud-to-cloud scenario in the current design?

Of course. For cloud to cloud – let's say you have a cloud deployment where one component is the OCF Native Cloud and another is your set of product services – you are not communicating with the OCF Native Cloud through CoAP over TCP. You issue gRPC requests directly, including the OAuth token. Please check the sample usage: https://wiki.iotivity.org/coapnativecloud#sample_usage
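For illustration, a minimal Go sketch of that cloud-to-cloud pattern. The address, metadata key and the commented-out generated client are assumptions; the real API is defined by the published protobuf contract:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/metadata"
    )

    // dialCloud opens a plain gRPC connection carrying an OAuth access
    // token – no CoAP-over-TCP session per requesting user.
    func dialCloud(ctx context.Context, accessToken string) (context.Context, *grpc.ClientConn, error) {
        conn, err := grpc.Dial("ocf-native-cloud.example.com:443", grpc.WithInsecure())
        if err != nil {
            return nil, nil, err
        }
        // Attach the OAuth token to every outgoing request.
        ctx = metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer "+accessToken)
        // A generated client would then be used, e.g.:
        //   client := pb.NewResourceDirectoryClient(conn)
        //   rsp, err := client.GetResources(ctx, &pb.GetResourcesRequest{})
        return ctx, conn, nil
    }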

 

 

 

 

Best regards

Max.

-- 
Max Kholmyansky
Software Architect - SURE Universal Ltd.


Ondrej Tomcik
 

Inline :)

 

Ondrej Tomcik :: KISTLER :: measure, analyze, innovate

 

From: Max Kholmyansky [mailto:max@...]
Sent: Thursday, August 9, 2018 3:31 PM
To: Tomcik Ondrej
Cc: iotivity-dev@...; Scott King <Scott.King@...> (Scott.King@...); Gregg Reynolds (dev@...); Kralik Jozef; Rafaj Peter; JinHyeock Choi (jinchoe@...)
Subject: Re: [dev] OCF Native Cloud 2.0

 

Thanks, Ondrej.

 

Just to clarify what I meant by the "server state".

My question was not about the connectivity, but rather the actual state of the resources. 

Say, the "OCF Server" is a Light device. 

To know if the light is ON - I can query via GET.

 

I see

 

But I may also need to:

1. React on the server side on the change of the state (light ON / OFF) - without having an OCF client connected.

2. Keep the history of the state changes (for analytics or whatever)

 

Each change which occurs on the OCF Device side (ResourceChanged) is propagated to the Resource Aggregate (ResourceService). The Resource Aggregate will raise an event that the resource was changed and store it in the event store. That means you have the whole history of what was changed during the time the device was online. The ResourceShadow listens to these events (ResourceRepresentationUpdated) and builds the ResourceShadow view model. If you are interested in this event, you can of course subscribe as well and react to every ResourceRepresentationUpdated event. It's the event bus (Kafka, RabbitMQ, ...) where every event is published, and any internal component can subscribe. Or an OCF Client can subscribe through the gateway, which from that moment is also listening on that specific topic.

Does it make sense?
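To make it concrete, a small Go sketch of such an internal subscriber, reusing the hypothetical Bus/Event types from the earlier sketch; only the event name ResourceRepresentationUpdated comes from the design, the rest is assumed:

    package eventbus

    import (
        "context"
        "log"
    )

    // watchResourceChanges lets any internal component react to resource
    // changes published on the event bus.
    func watchResourceChanges(ctx context.Context, bus Bus) error {
        return bus.Subscribe(ctx, "ResourceRepresentationUpdated", func(e Event) {
            // React here: update analytics, trigger a rule, or fan the
            // change out to OCF Clients subscribed through a gateway.
            log.Printf("resource changed: %s", e.Payload)
        })
    }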

 

The question is how I can solve those requirements.

 

Is there a productized interface to receive cross-account notifications on the resource state changes? 

 

 

 

Regards

Max.

 



Scott King <Scott.King@...>
 

Ondrej,

 

First off, congrats on publishing such an extensive document!

 

- Maybe I'm not looking in the right place, but I'm not seeing much explanation for how this architecture optimizes for making it easy to integrate OCF cloud messaging into existing infrastructure/architecture (especially for Amazon/Google/IBM/Azure to offer it as part of their current IoT managed services).

- You state that L7 load balancing is an option for CoAP. It was my understanding that no load balancers support L7 load balancing with CoAP. Don't you also need to stick to L4 because the OCF device relies on a long-lived connection? I could be wrong, so let me know.

- I'm concerned that ES/pubsub isn't preferable to point-to-point HTTP/gRPC communication for some of the use cases in your diagrams. For example, if the device is trying to sign in to a CoAP gateway, shouldn't the auth service give a response to the OCF gateway's token validation request rather than publishing an event itself? Can you help me better understand who else needs to be immediately notified of a successful login other than the gateway?

  - How many pubsub channels are required per device in order to implement your architecture?

  - Would we benefit from an in-memory DB like Redis to handle persisting the device shadow and device presence/login status?

- Given the importance of Alexa/Google Assistant functionality for commercial adoption, I would hope that we can work together to ensure workflow compatibility and develop examples for this feature.

- Can you confirm that you plan to automatically observe all resources that get published to the cloud?

 

I feel like we need to make a stronger distinction between the minimum feature set that satisfies the OCF spec and the additional features that we all want that are out of spec, like device shadow. Can you confirm whether this architectural proposal means that you aren't interested in the gRPC API that I proposed?

 

Regards,
Scott

 

 


Gregg Reynolds
 



On Thu, Aug 9, 2018 at 6:48 AM, Ondrej Tomcik <Ondrej.Tomcik@...> wrote:


Obviously you put a lot of work into this, thanks.

How does it handle third-party users?  For example, Mom, Dad, kids, relatives, guests, all have different permissions, dynamically configurable.

Gregg


Ondrej Tomcik
 

Hello Scott!

 

Ondrej Tomcik :: KISTLER :: measure, analyze, innovate

 

From: Scott King [mailto:Scott.King@...]
Sent: Thursday, August 9, 2018 4:40 PM
To: Tomcik Ondrej; Max Kholmyansky
Cc: iotivity-dev@...; Gregg Reynolds (dev@...); Kralik Jozef; Rafaj Peter; JinHyeock Choi (jinchoe@...)
Subject: RE: [dev] OCF Native Cloud 2.0

 

Ondrej,

 

First off, congrats on publishing such an extensive document!

 

- Maybe I'm not looking in the right place, but I'm not seeing much explanation for how this architecture optimizes for making it easy to integrate OCF cloud messaging into existing infrastructure/architecture (especially for Amazon/Google/IBM/Azure to offer it as part of their current IoT managed services).

This will be part of the implementation. The published document does not limit you in this area, but it does not describe how to achieve it either. It's an implementation "detail".

 

- You state that L7 load balancing is an option for CoAP. It was my understanding that no load balancers support L7 load balancing with CoAP. Don't you also need to stick to L4 because the OCF device relies on a long-lived connection? I could be wrong, so let me know.

Good point. I didn't investigate whether L7 load balancing for CoAP exists. I mentioned it because it is an option: CoAP is very similar to HTTP, so it can be implemented.

And regarding long-lived TCP connections, I am not sure. Why couldn't you have an open TCP connection to the L7 load balancer and distribute requests to other components based on the CoAP data? I might be missing something.

 

- I'm concerned that ES/pubsub isn't preferable to point-to-point HTTP/gRPC communication for some of the use cases in your diagrams. For example, if the device is trying to sign in to a CoAP gateway, shouldn't the auth service give a response to the OCF gateway's token validation request rather than publishing an event itself? Can you help me better understand who else needs to be immediately notified of a successful login other than the gateway?

Event Sourcing and gRPC do not fit together; CQRS and gRPC, yes. Where you have events, you have the event bus, for example Kafka + protobuf. Where you have commands, gRPC might be a solution, or again the event bus used as a command queue. The response to the sign-in comes in the form of an event simply because of non-blocking communication; all communication in the OCF Native Cloud is non-blocking. So the OCF CoAP Gateway will issue a command to the AuthorizationService to verify the sign-in token and not wait for the response – the response may take some time, introduce delay into the whole system, and block the gateway. Therefore the OCF CoAP Gateway listens for the events (SignedIn), maps them to the issued requests and replies to the device. It's also scalable: you can have a scaled AuthorizationService and issue the SignIn command to the command queue; the most available AuthorizationService instance will take it from the queue, process it and raise an event that it was processed. So it's not about "who else needs to be immediately notified" but about non-blocking communication and scalability.
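To make the non-blocking flow concrete, a minimal Go sketch of how a gateway could correlate an issued SignIn command with the later SignedIn event. The SignedIn name is from this thread; everything else is illustrative:

    package gateway

    import "sync"

    // SignedIn is the event the gateway waits for after issuing a SignIn
    // command; CorrelationID ties it back to the original request.
    type SignedIn struct {
        CorrelationID string
        OK            bool
    }

    // pending correlates issued commands with their eventual events, so
    // the connection handler blocks on a channel instead of an RPC.
    type pending struct {
        mu sync.Mutex
        m  map[string]chan SignedIn
    }

    // wait registers a correlation ID and returns the channel on which
    // the device's connection handler waits for the outcome.
    func (p *pending) wait(id string) chan SignedIn {
        p.mu.Lock()
        defer p.mu.Unlock()
        ch := make(chan SignedIn, 1)
        p.m[id] = ch
        return ch
    }

    // onSignedIn is invoked for every SignedIn event from the event bus;
    // it completes the matching in-flight sign-in, if any.
    func (p *pending) onSignedIn(e SignedIn) {
        p.mu.Lock()
        ch, ok := p.m[e.CorrelationID]
        delete(p.m, e.CorrelationID)
        p.mu.Unlock()
        if ok {
            ch <- e
        }
    }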

  - How many pubsub channels are required per device in order to implement your architecture?

I haven't yet defined the organization of channels, but usually it's one channel per event type.

  - Would we benefit from an in-memory DB like Redis to handle persisting the device shadow and device presence/login status?

You don't need Redis at all. The resource shadow is stored as a series of ResourceRepresentationUpdated events in the event store. When the ResourceShadow service is loaded, it simply loads these events for every resource and subscribes to the event type, so the resource shadow is updated immediately when such an event occurs. You can restart it or scale it; it will again load everything and subscribe. An in-memory DB is enough.
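A rough Go sketch of that lifecycle, with hypothetical types: replay the stored events into an in-memory map on startup, keep applying live ones, and answer GETs from memory:

    package shadow

    import "sync"

    // Update mirrors a ResourceRepresentationUpdated event.
    type Update struct {
        ResourceURI    string
        Representation []byte
    }

    // Shadow keeps the latest representation of every resource in memory.
    type Shadow struct {
        mu    sync.RWMutex
        state map[string][]byte
    }

    // NewShadow rebuilds the view model by replaying events loaded from
    // the event store; Apply is then called for each live event.
    func NewShadow(replay []Update) *Shadow {
        s := &Shadow{state: make(map[string][]byte)}
        for _, u := range replay {
            s.Apply(u)
        }
        return s
    }

    func (s *Shadow) Apply(u Update) {
        s.mu.Lock()
        s.state[u.ResourceURI] = u.Representation
        s.mu.Unlock()
    }

    // Get answers a client GET from memory, without contacting the device.
    func (s *Shadow) Get(uri string) ([]byte, bool) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        b, ok := s.state[uri]
        return b, ok
    }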

- Given the importance of Alexa/Google Assistant functionality for commercial adoption, I would hope that we can work together to ensure workflow compatibility and develop examples for this feature.

Sure

- Can you confirm that you plan to automatically observe all resources that get published to the cloud?

Confirmed

 

I feel like we need to make a stronger distinction between the minimum feature set that satisfies the OCF spec and the additional features that we all want that are out of spec, like device shadow. Can you confirm whether this architectural proposal means that you aren't interested in the gRPC API that I proposed?

The proposed protobuf spec can be used, but just for commands.

 

Regards,
Scott

 

 



Ondrej Tomcik
 

Hello Gregg,

 

Ondrej Tomcik :: KISTLER :: measure, analyze, innovate

 

From: Gregg Reynolds [mailto:dev@...]
Sent: Thursday, August 9, 2018 5:45 PM
To: Tomcik Ondrej
Cc: iotivity-dev@...; Scott King <Scott.King@...> (Scott.King@...); Max Kholmyansky (max@...); Kralik Jozef; Rafaj Peter
Subject: Re: OCF Native Cloud 2.0

 

 

 

On Thu, Aug 9, 2018 at 6:48 AM, Ondrej Tomcik <Ondrej.Tomcik@...> wrote:


Obviously you put a lot of work into this, thanks.

 

How does it handle third-party users?  For example, Mom, Dad, kids, relatives, guests, all have different permissions, dynamically configurable.

The current IoTivity implementation forces you to use a very specific user/device management model, which is bad.

In this concept, the AuthorizationService implementation is completely up to the user – the user being the company that wants to use the OCF Native Cloud. Of course we will provide a sample AuthorizationService which communicates with GitHub, but this one will not be used for production – I believe :) If you're interested in multiple owners of a device, sharing the device with friends, and so on, you have to model this structure of users and management on your own. The OCF Native Cloud just defines the contract for how it communicates with the AuthorizationService. Meaning, the OCF Native Cloud will ask the AuthorizationService whether the pending request (resource URI + device + token) is authorized or not. The token identifies the user who issued the request, and together with the resource URI + device ID, the AuthorizationService can clearly answer whether the request is authorized.

 

The AuthorizationService should also emit events (changes to which SID (user identifier – subject ID) has access to which DID (device ID)), which are cached by the OCF Native Cloud so that each user request does not trigger a request to the AuthorizationService.
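Expressed as a Go sketch, the contract could look roughly like this; the method and type names are illustrative, not from the document:

    package auth

    import "context"

    // AuthorizationService is the contract the OCF Native Cloud expects
    // from the user-supplied implementation: given the token, device and
    // resource of a pending request, answer whether it is authorized.
    type AuthorizationService interface {
        Authorize(ctx context.Context, token, deviceID, resourceURI string) (bool, error)
    }

    // AccessChanged is the kind of event the service should also emit,
    // so the cloud can cache which user (SID) may access which device
    // (DID) and avoid a round trip per request.
    type AccessChanged struct {
        SID     string // subject (user) ID
        DID     string // device ID
        Granted bool
    }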

 

Make sense? You can see it here:

https://wiki.iotivity.org/_media/auth_1.png?cache=&w=900&h=699&tok=413462

Or read the whole Authorization Bounded Context - https://wiki.iotivity.org/coapnativecloud#authorization_bounded_context

 

And this note is important:

Each request session must be backed by an access token, so the OCF Native Cloud can authorize that request. In case of the OCF Servers / Clients, a TCP session must be backed by the access token and validated through the sign-up process. Each command issued by the OCF CoAP Gateway is then backed by the validated token.

 


 


Scott King <Scott.King@...>
 

Gregg,

 

I can only speak to the spec, but I didn't see anything in the spec that supported different "users" (aka other humans that have been provisioned mediator or client tokens by the "main" user) of the same device group having different permissions. From the perspective of the device, all requests appear to come from the cloud, so if you need to handle finer-granularity access control then it'd need to be a "not in the spec" feature in the cloud codebase (IIRC Samsung did this with their Java implementation). I'm personally a CNCF fanboy, so I'd recommend we check out OPA, but I don't know what the priority or pre-existing strategy for implementing that feature is.

 


 


Gregg Reynolds
 



On Thu, Aug 9, 2018 at 11:20 AM, Ondrej Tomcik <Ondrej.Tomcik@...> wrote:


The current IoTivity implementation forces you to use a very specific user/device management model, which is bad.


What is bad about it? To me the OCF security model boils down to resources, creds, and ACLs, and the services need to enforce policies (auth/authz). I guess you are talking only about the IoTivity implementation, not the security model?

If the implementation is bad we should look at improving it, no?

G

 


 



Gregg Reynolds
 



On Thu, Aug 9, 2018, 6:48 AM Ondrej Tomcik <Ondrej.Tomcik@...> wrote:


"Resource aggregate", "Resource bounded context" - huh?

Strongly recommend you translate to plain language. I don't know what it means (and I really really do not want to have to master any more buzz phrases.)

G


 


Ondrej Tomcik
 


"Resource aggregate", "Resource bounded context" - huh?

Strongly recommend you translate to plain language. I don't know what it means (and I really really do not want to have to master any more buzz phrases.)
It's not a buzzword. It has its meaning, and transforming it into something else does not make sense. It's from 2004, sorry. One who wants and needs to understand it will get it. It's in the context of Domain-Driven Design and Event Sourcing. As this is driving the OCF Native Cloud, I don't see the point of changing it to something else.

Also, even if you don't understand these terms, you should still get the point. I double-checked this with people who didn't know what the terms mean.

Ondrej



Gregg Reynolds
 



On Thu, Aug 9, 2018, 6:48 AM Ondrej Tomcik <Ondrej.Tomcik@...> wrote:


Btw, I applaud your efforts, please take my feedback positively.

Think you need to align some language with the OAuth spec.  E.g. "authorization code" is a technical term with specific semantics in OAuth 2, not sure how you're using it.



Ondrej Tomcik
 


Btw, I applaud your efforts, please take my feedback positively.

Think you need to align some language with the OAuth spec.  E.g. "authorization code" is a technical term with specific semantics in OAuth 2, not sure how you're using it.
I am referring exactly to the "authorization code" from the OAuth2 terminology.



Gregg Reynolds
 



On Thu, Aug 9, 2018, 1:25 PM Ondrej Tomcik <Ondrej.Tomcik@...> wrote:

Btw, I applaud your efforts, please take my feedback positively.

Think you need to align some language with the OAuth spec.  E.g. "authorization code" is a technical term with specific semantics in OAuth 2, not sure how you're using it.
I am referring exactly to the "authorization code" from the OAuth2 terminology.

Ok. The implication is that that is the only supported grant type. Is that the case?



Ondrej Tomcik
 




Ok. The implication is that that is the only supported grant type. Is that the case?
It is the only supported grant type for device onboarding (sign-up). It's in the specification, but they didn't explicitly say that it's OAuth2.

  1. The user asks the OAuth server to issue the authorization code.
  2. The authorization code is set, through the provisioning tool, on the device's cloud resource, together with the IP, etc.
  3. The device connects and proceeds with sign-up -> it sends the authorization code to the OCF Native Cloud.
  4. The OCF Native Cloud exchanges the authorization code for the access token (with the OAuth server).
  5. The access token is returned to the device, and the device is authorized.
For this process, the authorization code grant type is required; a minimal sketch of step 4 follows.
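A minimal Go sketch of step 4, using the standard golang.org/x/oauth2 package; the client credentials and token URL are placeholders:

    package onboarding

    import (
        "context"

        "golang.org/x/oauth2"
    )

    // exchangeCode performs step 4: the cloud exchanges the authorization
    // code received from the device during sign-up for an access token.
    func exchangeCode(ctx context.Context, code string) (string, error) {
        conf := &oauth2.Config{
            ClientID:     "ocf-native-cloud",
            ClientSecret: "<secret>",
            Endpoint: oauth2.Endpoint{
                TokenURL: "https://oauth.example.com/token",
            },
        }
        tok, err := conf.Exchange(ctx, code)
        if err != nil {
            return "", err
        }
        // The access token is returned to the device (step 5), which then
        // uses it to authorize its session.
        return tok.AccessToken, nil
    }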



Ondrej Tomcik
 



From: Scott King [Scott.King@...]
Sent: 09 August 2018 22:42
To: Tomcik Ondrej; Max Kholmyansky
Cc: iotivity-dev@...; Gregg Reynolds (dev@...); Kralik Jozef; Rafaj Peter; JinHyeock Choi (jinchoe@...)
Subject: RE: [dev] OCF Native Cloud 2.0

I have a tough time reading inline comments. I hope this is an acceptable format.

 

- This will be part of the implementation. The published document does not limit you in this area, but it does not describe how to achieve it either. It's an implementation "detail".

  - If you want multiple backend implementations for a given interface like the OCF cloud, then you need to make things very easy and simple. I would assert that any implementation details "behind" the interface (like the CQRS architecture) should be kept in the GitHub repo. The wiki shouldn't be targeting devs who are working on your codebase; it should be targeting devs who want to use your codebase in production.

Come on :) The wiki can be a place both for developers and for users. What matters is how you organize it, so that everybody finds what they are searching for.

- L7 load balancing

  - If you want to add CoAP functionality to a popular LB like nginx or Envoy (preferably Envoy because of CNCF membership and no "enterprise" tier), then we should discuss that. It would be a great contribution to the ecosystem. I don't see why you couldn't implement L7 routing as long as the LB maintained the long-lived connection instead of the OCF interface (you'd need to persist the state of the device, like being logged in, somewhere though. Maybe a Redis DB?)

L7 might be the next step after a working OCF Native Cloud. We can discuss it. Redis is not needed; the state of the device is already persisted in the event store.

- ES/gRPC

  - Golang can use a gRPC API in a non-blocking manner via goroutines. I think you have a good point, but just didn't explain it well :)

Sure, but that was not the only reason. :) I will try to explain it in the second document, covering the tech stack used for the implementation.

  - My desire for gRPC was for communication with a "sidecar proxy" (i.e. the official OCF interface communicates only with devices/LB and a sidecar proxy, which communicates with pubsub, DB, etc.)

    - You can keep using pubsub for many things, but you're abstracting away all "non-standard" implementation details (e.g. GCP Pub/Sub vs Kafka vs NATS)

    - I think we are agreeing when you say "only use gRPC for commands". But I think we disagree on which commands you use it with :)

It depends where you want to do this abstraction: through the sidecar proxy, or by preparing the code of the components directly, so the user modifies the codebase if a technology (Kafka, NATS, ...) is not supported. In my opinion, it would be overkill to use a sidecar just to make the messaging technology transparent. Let's see – let's discuss it on Slack.

o   If you use 1 channel per event type, that is different than Mainflux, which uses ~1 NATS channel per device. Does this mean that services will receive many “irrelevant” events, since they receive events for all devices? Can that scale to millions of devices?

The question is: can you scale a channel per device to millions of devices? It's best practice to have one event type per channel/topic, and it's not a good idea to have a topic per entity (user, device, ...). But of course we have to consider everything. Also, this is an implementation "detail", out of scope of the current doc.
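
For illustration, here is how the two layouts differ with the NATS Go client; the subject names are made up for this example, not a proposed convention.

package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Channel per event type: one subject per event kind; a service
	// interested in this event subscribes exactly once and may filter
	// by device inside the handler if it needs to.
	nc.Subscribe("events.resource-representation-updated", func(m *nats.Msg) {
		log.Printf("resource updated: %s", m.Data)
	})

	// Channel per device (the Mainflux-style layout): a service that
	// wants all events needs a wildcard and one more level of routing.
	nc.Subscribe("devices.*.events", func(m *nats.Msg) {
		log.Printf("event on %s: %s", m.Subject, m.Data)
	})

	nc.Publish("events.resource-representation-updated", []byte(`{"deviceId":"d1"}`))
	nc.Publish("devices.d1.events", []byte(`{"type":"updated"}`))
	nc.Flush()
}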

·         I proposed redis as an alternative to relying on the message queue for persistence. This allows more implementation flexibility (my goal is to make an implementation that uses as many CNCF projects as possible). I am not 100% confident in that proposal, so I look forward to your response.

A message queue is not a persistence layer. Kafka can't be used for event sourcing, nor can NATS Streaming. These are not event stores.

In general, there are two options: delegate the transaction defined in the IRepository to a 3rd-party component, for example EventStore (https://eventstore.org/), or handle this transaction in our code, which makes things more complicated. Of course it looks easy, but it has many bottlenecks. We're now evaluating possible options in this area.
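
As a sketch of what that contract looks like (hypothetical names, not the project's API), and of what an event store gives you that a plain message queue does not: an ordered per-aggregate history plus an append guarded by optimistic concurrency.

package eventstore

import "context"

// Event is one immutable fact about an aggregate, e.g.
// ResourceRepresentationUpdated for a resource.
type Event struct {
	AggregateID string
	Type        string
	Version     uint64
	Data        []byte
}

// Store is what the write side (the Resource Aggregate) needs.
type Store interface {
	// Append fails if expectedVersion does not match the current stream
	// version; this transactional check is exactly what Kafka or NATS
	// Streaming alone cannot give you.
	Append(ctx context.Context, aggregateID string, expectedVersion uint64, events []Event) error
	// Load replays the full history of one aggregate so its state (or a
	// view model such as the resource shadow) can be rebuilt.
	Load(ctx context.Context, aggregateID string) ([]Event, error)
}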

·         I disagree with the decision to automatically observe every resource. For my (consumer electronics) use case, there are many times that I want to observe a resource, but I don’t often want to observe EVERY resource. I am 100% in agreement that it should be easy/standard to observe resources, but that should be a later step after initial device provisioning (ex: have your client send an observe request to the device via the cloud after the device has been provisioned and signed in; the device will see this as the cloud sending the observe request and respond accordingly). There are still details that would need to be hashed out, but I want to get your feedback on this comment.

It's the core requirement to observe everything. Otherwise you can't provide an up-to-date resource shadow, which leads to forwarding every GET to the device. And this does not make sense.

 

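
As a compressed sketch of that flow in Go: the gateway publishes the SignIn command with a correlation ID, keeps the pending request, and replies to the device when the matching SignedIn event comes back from the bus. The in-memory channel stands in for Kafka/NATS; apart from the SignIn/SignedIn names, everything here is illustrative.

package main

import (
	"fmt"
	"sync"
)

// SignIn is the command the gateway issues; SignedIn is the event the
// AuthorizationService raises once the token was verified.
type SignIn struct {
	CorrelationID string
	DeviceID      string
	Token         string
}

type SignedIn struct {
	CorrelationID string
	DeviceID      string
	OK            bool
}

type Gateway struct {
	mu      sync.Mutex
	pending map[string]chan SignedIn // correlation ID -> waiting request
	cmds    chan<- SignIn            // command queue to the AuthorizationService
}

// HandleSignInRequest fires the command and parks only this request's
// goroutine; the gateway as a whole keeps serving other devices.
func (g *Gateway) HandleSignInRequest(corrID, deviceID, token string) bool {
	done := make(chan SignedIn, 1)
	g.mu.Lock()
	g.pending[corrID] = done
	g.mu.Unlock()
	g.cmds <- SignIn{CorrelationID: corrID, DeviceID: deviceID, Token: token}
	ev := <-done // resumed when the SignedIn event is routed back
	return ev.OK
}

// OnSignedIn is the gateway's event-bus subscription callback.
func (g *Gateway) OnSignedIn(ev SignedIn) {
	g.mu.Lock()
	done, ok := g.pending[ev.CorrelationID]
	delete(g.pending, ev.CorrelationID)
	g.mu.Unlock()
	if ok {
		done <- ev
	}
}

func main() {
	cmds := make(chan SignIn, 1)
	gw := &Gateway{pending: map[string]chan SignedIn{}, cmds: cmds}
	// Stand-in AuthorizationService: take the command, raise the event.
	go func() {
		c := <-cmds
		gw.OnSignedIn(SignedIn{CorrelationID: c.CorrelationID, DeviceID: c.DeviceID, OK: true})
	}()
	fmt.Println("signed in:", gw.HandleSignInRequest("corr-1", "dev-1", "token"))
}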
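
A minimal sketch of that read side, assuming hypothetical Store/Bus interfaces (a real implementation also has to handle events arriving between the replay and the subscribe):

package shadow

import "sync"

// ResourceRepresentationUpdated is the event named in the design; the
// field layout here is illustrative.
type ResourceRepresentationUpdated struct {
	ResourceID     string
	Representation []byte
}

// Store is the replay side (the event store), Bus the live side (the
// event bus); both are assumptions for this sketch.
type Store interface {
	LoadAll(eventType string) []ResourceRepresentationUpdated
}

type Bus interface {
	Subscribe(eventType string, fn func(ResourceRepresentationUpdated))
}

type ResourceShadow struct {
	mu     sync.RWMutex
	latest map[string][]byte // resource ID -> last known representation
}

func NewResourceShadow(store Store, bus Bus) *ResourceShadow {
	rs := &ResourceShadow{latest: map[string][]byte{}}
	// 1. Rebuild the view model from history; safe to repeat on every
	//    restart, which is why no external db is needed.
	for _, ev := range store.LoadAll("ResourceRepresentationUpdated") {
		rs.apply(ev)
	}
	// 2. Keep it current from the event bus.
	bus.Subscribe("ResourceRepresentationUpdated", rs.apply)
	return rs
}

func (rs *ResourceShadow) apply(ev ResourceRepresentationUpdated) {
	rs.mu.Lock()
	defer rs.mu.Unlock()
	rs.latest[ev.ResourceID] = ev.Representation
}

// Get answers a client GET from the shadow, without querying the device.
func (rs *ResourceShadow) Get(resourceID string) ([]byte, bool) {
	rs.mu.RLock()
	defer rs.mu.RUnlock()
	rep, ok := rs.latest[resourceID]
	return rep, ok
}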
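
To make that concrete, a product service only needs a gRPC channel and the OAuth token attached per call; in this sketch the endpoint address and the commented-out client stub are placeholders, not the published API.

package c2c

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/metadata"
)

// dial opens the service-to-service channel to the OCF Native Cloud.
// Production would use TLS credentials instead of insecure ones.
func dial(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
}

// withToken attaches the OAuth access token to one outgoing call; this
// replaces the per-user CoAP/TCP connection and its sign-in.
func withToken(ctx context.Context, accessToken string) context.Context {
	return metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer "+accessToken)
}

// Usage with generated stubs (hypothetical service name):
//
//	conn, _ := dial("ocf-native-cloud:9090")
//	rd := pb.NewResourceDirectoryClient(conn)
//	res, err := rd.GetResourceDirectory(withToken(ctx, token), &pb.GetRequest{})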


Gregg Reynolds
 



On Thu, Aug 9, 2018, 4:04 PM Ondrej Tomcik <ondrej.tomcik@...> wrote:

...

It's the core requirement to observe everything.

According to whom? It's certainly not a core requirement for me.

Otherwise you can't provide an up-to-date resource shadow, which leads to forwarding every GET to the device. And this does not make sense. 

The ability to choose which resources to observe makes perfect sense to me.

G