Extending SCCM to AWS

  • Question

  • Hi everybody.

    We are starting a project to move part of our infrastructure to AWS (Amazon Web Services). I estimate that it will be 100-200 servers initially, and within the next few years that number may grow to 500. Since we would like to keep those servers patched, the plan is to extend our existing SCCM 2012 R2 environment to AWS. It is still not clear what to start with: a DP, DP+MP, DP+MP+SUP, or maybe a Secondary Site.

    At the moment we are not using SSL, so the main concerns are encryption and proper authentication between the servers in AWS and the rest of the infrastructure. We already have a PKI, and I found plenty of links on implementing SSL. However, it is still not clear what the best approach would be in our case. In particular, we are concerned with how to make sure the client or SCCM component is talking to the right server and not to something else that merely holds a valid certificate from a trusted authority (a TLS-level sketch of that check follows this post).
    Tuesday, August 23, 2016 3:35 PM
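
    For context on the concern above: at the TLS layer, "talking to the right server" means the client must check two things, that the server certificate chains to a root it trusts (ideally only the internal PKI root, not every public CA) and that the certificate's subject/SAN matches the name the client intended to reach. The sketch below is purely illustrative and is not how the ConfigMgr agent is implemented; the host name and CA bundle path are placeholders.

        import socket
        import ssl

        # Hypothetical values for illustration only.
        MP_FQDN = "mp01.corp.example.com"        # the server we intend to reach
        INTERNAL_CA_BUNDLE = "corp-root-ca.pem"  # our own PKI root, not the OS default store

        # Trust only the internal PKI root rather than every publicly trusted CA.
        context = ssl.create_default_context(cafile=INTERNAL_CA_BUNDLE)
        context.check_hostname = True            # subject/SAN must match MP_FQDN
        context.verify_mode = ssl.CERT_REQUIRED  # the certificate chain must validate

        with socket.create_connection((MP_FQDN, 443)) as sock:
            # server_hostname drives both SNI and the host name check above
            with context.wrap_socket(sock, server_hostname=MP_FQDN) as tls:
                print("Connected to:", tls.getpeercert()["subject"])

    A server that merely presents some valid certificate, but for a different name or from a CA outside that bundle, fails either the host name check or the chain check.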

All replies

  • First, forget that this is AWS, as that's really irrelevant. As long as the traffic is routable and the systems are reachable, it's just another location. Thus, this would be like any other location in your environment, because that's all it really is.

    What you would place in AWS to support ConfigMgr mainly depends upon the connectivity between the current ConfigMgr site server and database and AWS. As noted, though, the criteria are no different than for any other remote location.

    As for security, there is only one approach to using HTTPS client communication, so I'm not sure what's hindering you there.

    As for your final concern, what else would they talk to? I don't understand the concern. The ConfigMgr agent isn't just going to magically start communicating with an Exchange server. The agents follow the normal affinity rules for ConfigMgr site roles. What exactly are you concerned could happen?


    Jason | http://blog.configmgrftw.com | @jasonsandys

    Wednesday, August 24, 2016 8:49 AM
  • Jason,

    Thank you very much.

    There is a difference between AWS and other locations: our InfoSec and Network teams wouldn't want to open ports from every server in AWS to our main location the way we currently do for our remote sites. The connection is over private lines, but I guess that is related to the way ACLs are managed in AWS. (A quick port-reachability sketch follows this post.)

    So my guess is that we need DP(s)+MP(s)+SUP(s) in AWS, with connections permitted from them to our main site. We also need some way to bind the AWS clients to them. That is easy for DPs, but I am not sure how to do it for MPs and SUPs.

    As for HTTPS, the question was how an AWS server (client) will authenticate itself to the server it talks to, e.g. an MP to the Site Server. From my understanding, that can be achieved by specifying ‘Trusted Root Certification Authorities’ under Client Computer Communication in the Site Properties. I just want to confirm that this should do the trick.


    Tuesday, August 30, 2016 3:53 PM
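
    Since the design above hinges on which flows InfoSec has to allow, a quick reachability check from an AWS instance toward the proposed site systems can make that conversation concrete. The host names below are placeholders and the ports are only the usual defaults (443 for an HTTPS MP/DP, 8531 for an HTTPS SUP/WSUS, 445 and 1433 for what site systems in AWS would typically need back to the site server and site database); confirm the actual list against your own design.

        import socket

        # Placeholder host names; ports are common ConfigMgr/WSUS defaults.
        CHECKS = [
            ("mp-aws.corp.example.com", 443),      # HTTPS MP/DP
            ("sup-aws.corp.example.com", 8531),    # HTTPS SUP (WSUS)
            ("siteserver.corp.example.com", 445),  # SMB back to the site server
            ("sitedb.corp.example.com", 1433),     # SQL back to the site database
        ]

        for host, port in CHECKS:
            try:
                with socket.create_connection((host, port), timeout=5):
                    print(f"{host}:{port} reachable")
            except OSError as exc:
                print(f"{host}:{port} NOT reachable ({exc})")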
  • MP affinity was introduced in 2012 SP2 CU2 and 2012 R2 SP1 CU2. There is no SUP affinity, though; you just have to rely on the client failing to reach a SUP and then switching to one that it can reach.

    I still don't understand your last question. Clients always use certificates to identify and authenticate themselves to the site. With HTTP communication, this is done using self-signed certs created by the client agent when it is installed. With HTTPS communication, it is done using certs issued to the systems by your PKI. MP-to-site-server communication has nothing to do with client communication, so I'm not sure how that fits in with your question. Adding a trusted root certificate authority is done to enable the certificate selection process on the client in cases where the client may have multiple PKI-issued client authentication certs, and it is also used during OSD (which probably isn't relevant here). A rough illustration of that selection logic follows this reply.


    Jason | http://blog.configmgrftw.com | @jasonsandys

    Tuesday, August 30, 2016 4:04 PM
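
    To make the certificate selection point concrete: conceptually, the client filters the machine's certificates down to those that carry the Client Authentication EKU and chain to the trusted root named in the site properties. The sketch below is a rough stand-in for that logic, not the agent's actual implementation; it uses the third-party Python `cryptography` package, the file names are placeholders, and comparing the issuer name is only a shortcut for proper chain building.

        from cryptography import x509
        from cryptography.x509.oid import ExtendedKeyUsageOID

        # Placeholders: PEM files exported from the machine store, and the root
        # configured under 'Trusted Root Certification Authorities'.
        CANDIDATE_CERT_FILES = ["cert1.pem", "cert2.pem"]
        TRUSTED_ROOT_NAME = "Corp Root CA"

        def has_client_auth_eku(cert: x509.Certificate) -> bool:
            """True if the certificate carries the Client Authentication EKU."""
            try:
                eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
            except x509.ExtensionNotFound:
                return False
            return ExtendedKeyUsageOID.CLIENT_AUTH in eku

        for path in CANDIDATE_CERT_FILES:
            with open(path, "rb") as f:
                cert = x509.load_pem_x509_certificate(f.read())
            # Issuer-name matching stands in for "chains to the configured root".
            usable = has_client_auth_eku(cert) and TRUSTED_ROOT_NAME in cert.issuer.rfc4514_string()
            print(path, "usable for client auth" if usable else "skipped")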