Microsoft Azure Government: Security Differentiation (GOV)

Well thank you everybody for
coming to my session, Azure Government Security and
Differentiation. I am Matt Rathbun. I am the Chief Information Security
Officer for Azure Government. So about the first half of
the presentation is really going to be talking about
things that are unique from a security and compliance
perspective about Azure Government. Reasons why you might select Azure
Government from either of those paradigms. The second half of the presentation
is leaning a little bit more forward into our security philosophy: tools,
approaches, things that we do from our back end,
things that are coming very soon for you in the future, in ways that
we’re actually trying to change the overall cybersecurity landscape
using the power of the cloud and the inflection point
that we have here. Also, while I do have
a full slide deck I’m happy if we don’t get to most of it. I like to have conversation and
interaction as much as possible so please, please feel
free to ask questions. Also, I can’t see any of you
because of the stage lights so just feel free to,
in addition to raising your hand, shout out if it doesn’t
seem like I’m catching you. I’m fine with being interrupted. It’s more fun for
me when it’s a conversation. So, getting started: in general,
what, what is Azure Government? Why might I select Azure Government
over other offerings and capabilities? It is a combination, really, of
identity management, client infrastructure, and some pretty advanced workflows
we saw in the last session. A lot of platform as
a service capabilities, our cognitive services capability,
all sorts of really interesting things that you can
do that go above and beyond just data center in the cloud
kind of infrastructure capabilities. But more than that it has some
unique activities around who can manage it, so first of all it’s
located solely in the United States. It is a cloud that is restricted
only to government entities or to partners who are working on
behalf of government entities. It is managed exclusively
by US citizens. So they are screened, and they must
be citizens of the United States. We don’t even do a US
persons sort of clause, which you can get away with
in an ITAR sort of world. Anybody who has access to
production is a citizen. And, like I mentioned,
it is for the government only. So you have to be a first-party
government entity. Or you have to be able to prove
that you are working on behalf of a first-party government entity,
or have access to restricted data. So if you are for
example a defense contractor and you have a signed authority letter
from the Secretary or from the Department of State about not
being able to export certain things that you work on, then yes,
you can come into this environment. But we don’t want commercial entities;
we don’t want Pepsi and Coke in here. Unless there is some
sort of secret energy drink I’m not aware of, then they can bring
that data in and that’s fine. Reasons why you might choose Azure
Government over other offerings. The first one is exclusivity. This is very important for us. We think it’s very important to
maintain a community of like-minded, or like-required, customers. Not all of our competitors
see it the same way. If you look at their government
offerings, you will find that, if you can afford to
pay their upcharge, they will allow anyone who wants
to come into that space. We really do restrict and
guarantee there’s a very formal and rigorous eligibility process you
must go through to become a customer of Azure Government. If you’re interested in checking to
see whether or not you can do that yourself, you can actually sign
up for an Azure Government trial. So you can sign up for a free trial. That free trial puts you through
the eligibility process so we can determine whether or
not your data and your organization qualifies
to be inside of our space. We have more reach from
a governmental perspective than any of our other competitors. We’re the most certified
cloud on the planet, from a trust perspective. And then really, hybrid flexibility. We’re the only hyperscale provider
that’s making a significant investment in the hybrid capability,
so making things work both on-prem and in the cloud,
and then also with other clouds. And then flexible, meaning we don’t
have a one size fits all solution. As Zack mentioned last time this is
not Microsoft from ten years ago. We don’t expect you to
use only old products. We use open source and
open standards wherever possible so that you can leverage our tools and techniques in multiple
instantiations. You can run things like
Linux in our cloud. 33% of my global workloads
run Linux today. Just as a little bit of proof,
you may not notice but I’m presenting off a Macintosh. This is not the Microsoft of old where you have to
use only Microsoft products. We are enabling an ecosystem that
allows you to use whatever is necessary for your application,
however that may be, wherever it may sit, and really moving and
taking power to the cloud. I talked about exclusive
to the Azure Government or exclusive to our
government customers, the easiest way to check
that is to do the trial. And if you’ve seen some of our other
presentations you will have seen some of this already. This is our existing footprint for our government capabilities
both in Azure and Office. You can see we currently have two
regions; regions are really large data center campuses, meaning
they’re effectively multiples of what you would think of as data centers
in each of these regions. They’re all within ten miles of each
other at the largest radius because we want to deal with
transactional speeds. The speed of light becomes a problem
once you go beyond ten miles, so everything’s closer than that. But otherwise there are very very
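As an aside, that ten-mile rule of thumb is easy to sanity-check with a quick back-of-the-envelope calculation. The fiber refractive index and the distances below are illustrative assumptions for the sketch, not Microsoft’s engineering numbers:

```python
# Why a region keeps its data centers within a ~10 mile radius, and why
# geo-replicated regions 500+ miles apart can't replicate synchronously:
# round-trip light time in fiber grows linearly with distance.

SPEED_OF_LIGHT_KM_S = 299_792        # vacuum, km/s
FIBER_REFRACTIVE_INDEX = 1.47        # typical single-mode fiber (assumption)
MILES_TO_KM = 1.609344

def fiber_rtt_ms(distance_miles: float) -> float:
    """Round-trip time in milliseconds over fiber of the given length."""
    km = distance_miles * MILES_TO_KM
    one_way_s = km / (SPEED_OF_LIGHT_KM_S / FIBER_REFRACTIVE_INDEX)
    return 2 * one_way_s * 1000

# Within a region (<= 10 miles): well under a millisecond, fine for
# synchronous, transactional writes across buildings.
print(f"10 mi RTT  ~{fiber_rtt_ms(10):.3f} ms")

# Between regions separated by 500+ miles: several milliseconds of pure
# propagation delay, before any switching or queuing.
print(f"500 mi RTT ~{fiber_rtt_ms(500):.2f} ms")
```

Real round trips are worse than this floor once equipment latency is added, which is exactly why the campus radius matters for transactional speed.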
large infrastructure campuses where we guarantee things like three
writes within our location. We can do high availability
configuration so you can be on multiple
co-location centers, multiple potential buildings,
different power sources. So that a fire in a single location
won’t take out all your access and then in addition to that you have
geo-replication capabilities. So right now we have
two Azure data centers, government data centers online,
we have three for Office 365. They are all separated
by at least 500 miles. So you can guarantee that civil
unrest or a natural disaster, or something else that might take
out, somehow, one of my entire campuses, won’t take out
the whole cloud infrastructure. We have the ability to connect
into those resources and infrastructures directly. Meaning you do not have to come
through the public Internet. You do not have to transit
the public Internet. These are all meet me points where
you can drop directly on Microsoft’s dark fiber network. We own 1.8 million miles
of dark fiber worldwide that we use just to connect our
own infrastructure together. We do that for a few reasons. One, we can sort of guarantee
really low latency between our data centers because we control the path. We wanna remove as much speed
of light problem as we can from the existing infrastructure and
capabilities. The other thing is we just wanna
make sure that there’s a lot of customers we have who are concerned
about either trust of the Internet, or really,
I have a lot of military customers. What happens in a cyber event if
the Internet is not available or is not trustworthy at all? So we went and enabled architecture
that you can actually cut off capability from the Internet. If the Internet were to go away,
Azure, our cloud, does not go away, and you still have
the capability to maintain those mission critical workloads
through these direct connections. We have new data centers coming
online here in the very near future. So we’re further expanding
that geo-replication promise. They’re gonna be in preview
in the next few months and then coming generally available
at the middle of the year so that you will be able to put your
location, your data center, or your replication application
into those spaces. Both from a geo-redundancy backup
DR kind of standpoint, but also if you want to do some
software load-balancing. If you have an application where you
want to be able to really just do active, active all over the place
and you wanna be able to tie that connection into that architecture
wherever the user is closest, you’ll be able to do that. So you can have for example stand
up in Phoenix and in Virginia and spread your load across the country based upon where your users
are likely to be coming from. Then, at the same time,
when we bring on those Phoenix and
San Antonio new data centers, we’ll be adding more ExpressRoute
infrastructure to those locations. The other thing that’s really
great about ExpressRoute is, not only is it a direct connection, it’s an incredibly
high speed connection. It’s up to ten gigabits,
bonded pairs. You can burst that bonded pair up
above ten gigabits together, but we always do the pair so that we can
guarantee that if one goes down, you still don’t lose
your whole connection. And those are available through
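That bonded-pair behavior can be sketched in a few lines; the dataclass, the per-link number, and the property names here are invented for illustration and are not an Azure API:

```python
# Sketch of the "bonded pair" idea: two physical links sold as one
# ExpressRoute circuit, so aggregate burst capacity is the sum of both,
# while losing one link still leaves a working path.

from dataclasses import dataclass

@dataclass
class BondedPair:
    link_gbps: float = 10.0   # per-link rate mentioned in the talk
    links_up: int = 2

    @property
    def burst_capacity_gbps(self) -> float:
        """Aggregate capacity across the links that are still up."""
        return self.link_gbps * self.links_up

    @property
    def connected(self) -> bool:
        return self.links_up > 0

circuit = BondedPair()
print(circuit.burst_capacity_gbps)   # both links up: can burst past 10 Gbps

circuit.links_up = 1                 # one link fails
print(circuit.connected, circuit.burst_capacity_gbps)
```

The design point is simply that capacity degrades rather than disappearing when a single link goes down.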
carrier neutral third party sites. So you can do meet me points where
you’re able to access based on whichever service provider you like. But you can drop your own MPLS
network or whatever architecture you want at any of those sites
directly on to our network, making it easier to get your users
connected into our infrastructure and removing that risk of going
through the public Internet. And then finally, if you are a
defense contractor, we have CAP connection points, so if you have to
go through the cloud access point, you can. We’re also helping
negotiate in conversations with the government about
trying to understand, or with the DOD about some challenges
that have happened at the CAPs. So hopefully coming soon we’ll have
some interesting announcements along those lines, but we’ve been partnered
with them since the very beginning on this process and are making a lot
of progress from that perspective. I mentioned that we are the most
trusted cloud on the planet. What that means is that we currently
have 53 regimes that have signed off on the security of our
structure around the world. So we have independent third
parties coming to do assessments. The most critical ones here in the
United States: we have FedRAMP Moderate and High on our government
infrastructure, plus DISA L2, L4, and L5 on a couple of dedicated DOD-
specific regions where we do physical isolation to DOD first
party only in both Virginia and Iowa. So that same idea that our regions
are multi data center campuses. That means that they’re dedicated
infrastructure stacks within that that is for the DOD only
in both of those locations. That’s how we get the physical
isolation requirements of Level 5 and 800-171, so
if you are a defense contractor and you have to meet those requirements,
we support and have that capability against all
of those large infrastructures. The next thing that’s important
though is it’s just not that we have all the certifications,
that we have all those shields. It’s also that we have more services
covered under each one of those. So if you, for example,
compare my next largest competitor, they have nine or ten total
services that are available in any sort of their governmental
level of packages, and if you actually look at their
government regions, the ones they allow for the military,
they’re down to six. So it’s six versus the 32
that are currently available. There are actually 39 services
deployed in Azure Government, and we have on the roadmap that those
additional seven services will be in our scope by the end of June. So, we have worked really,
really hard with our regulators, we actually developed a process that
they’re gonna be publishing to be used by all cloud providers. That moved us from an average of
18 months to add a new service or feature down to 60 days. So the moment Zach gives me
a service in Azure Government, I start my clock. I have 60 days, and I can actually
get it entirely through my internal review processes and
the federal review processes, and soon the DISA review
processes through reciprocity, for all of that environment’s
capabilities, so that you’ll be able to light it up. We wanna make sure that our most
important services are available to our government customers
as soon as possible. What’s interesting about this
is although we have more total customer-facing services in
our commercial environment, we have far more services certified
in our government environment, because we have those more
stringent requirements around citizenship and
other security concerns. While the code is the same running
in both places, our process for maintaining them is
a little different. We’ve made our compliance effort
focused on our government environment, so you have 12
services currently in public. We have 32 services in government,
and it’ll be 39 within
the next few months. Then to make it easy for you all to
be able to take advantage of that, we developed a program specifically
to help customers who build on top of our architecture
achieve compliance or achieve security goals. We don’t want you to have to
figure this out on your own. We have spent north of $100
million when we’re working with our own consultants and
auditors just on consulting and auditing fees to get
those 53 regimes. We don’t expect any
of our customers, especially our government
customers who are spending tax dollars to have to spend in
that same sort of region. We wanna make this easier and
more applicable for you. So we have a five-pillar program
that really helps achieve that goal. For a little more detail on
how some of that works: first, we have deployed architecture. You can actually go look,
through our government documentation sites or Trust Center, at how the architecture of each
of our services is designed, so you can perform a risk
analysis as to whether or not it is going to meet your
obligations and requirements. We also do publish a services list,
so you can see exactly where each of these services are,
against our various certification regimes. So if you need both FedRAMP and
PCI, what services are available, how can you mix and
match those capabilities. If there are services not on that
list but you need them, then we can do an NDA-based conversation
to let you know whether or not that is on our roadmap, and
roughly when we think something like that might be coming into approval. We are working on building out
a catalog of automated deployment templates. So version one of that is
that three-tier architecture. If anybody caught the last session
they talked about a basic three tier architecture. Instead of having to build that
yourself, we’re gonna have pre-built templates that are already built to
meet the standards of these regimes. You do one button click, and
it sets it up for you. It creates that three tier
architecture, that’s version one. It’s very similar to what our
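Concretely, such a pre-built catalog entry is just a deployment template, which is JSON that tooling can generate and hand to a deployment command. The single virtual-network resource, names, and address ranges in this sketch are hypothetical placeholders, not the actual pre-built templates:

```python
# Minimal sketch of a generated "one-button" three-tier ARM template:
# one virtual network with a subnet per tier (web / app / data).

import json

def three_tier_template(vnet_name: str = "three-tier-vnet") -> dict:
    """Build an ARM template dict with one subnet per tier."""
    subnets = [
        {"name": tier, "properties": {"addressPrefix": f"10.0.{i}.0/24"}}
        for i, tier in enumerate(["web", "app", "data"])
    ]
    return {
        "$schema": "https://schema.management.azure.com/schemas/"
                   "2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [{
            "type": "Microsoft.Network/virtualNetworks",
            "apiVersion": "2020-06-01",
            "name": vnet_name,
            "location": "[resourceGroup().location]",
            "properties": {
                "addressSpace": {"addressPrefixes": ["10.0.0.0/16"]},
                "subnets": subnets,
            },
        }],
    }

template = three_tier_template()
print(json.dumps(template, indent=2))
```

The generated file would then be deployed in one step, for example with `az deployment group create --template-file three-tier.json`, which is the "one button" the talk describes.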
competition has in place at the moment. Next level up,
coming in the very near future, probably more middle of the year
time frame, is then taking that idea of PowerShell scripting
in JSON extensions to say: okay, well, we’ve deployed
our architecture, it already meets
basic requirements, but let’s do some more detailed,
more fine-grained deployment. So I get that antivirus
deployed exactly right, so that I am enforcing password requirements,
all of that. So the goal is that we’re able
to take over as much as we want or as much as we can of the security
responsibility from you. Or make it easy so that it’s just
a menu of one button clicks that you can select to make that
compliance move out of your way. The next tier is the pre-built
certification documentation. So we have that against all of these
government regimes you might be interested in: FedRAMP Moderate or
High, DISA L2, L4, and L5.
Now, this is not my documentation. This is not the 4,000 pages that I
submit to the government to show that Azure Government
is actually secure. We’ve taken that and rewritten it so
that we have a 1,000-page SSP template where a large percentage
of that is already filled out for you, depending on whether
your architecture is IaaS or PaaS. We have already said that Microsoft
owns this security responsibility, and here’s what we
do on your behalf. This is what you are getting
automatically from Azure. We then socialized those with the
regulators, meaning the FedRAMP JAB has already approved
our FedRAMP templates. The DISA PMO has already
approved our DISA templates. If you use them, they do not look at
all that information that we filled out for you. They restrict their review to just
the specific set that you own. That also means if you have a 3PAO
and auditors to look at it, they only have to look
at the restricted set that’s unique to you. Now, level three of this
documentation is to make sure, as you are building out here, that we are
gonna start tying those automatic deployments and
certification things back together. Right now, for the places where you
still have responsibility and that security documentation we don’t just
say this is your responsibility and then just walk away. We give you examples of what
a passing answer will look like against that structure. We give you examples of what good architecture would look
like in that scenario. And wherever that’s possible, we provide you links to Microsoft
technical articles, examples of what might
satisfy this requirement or how you could build it up. We’re gonna start pointing those
then back to the ARM templates, as we build out that library to say,
actually, just go deploy this ARM template. And it will satisfy this control for
you. And then the really high level state
is when you click that ARM template. Not only is it going to deploy it, not only is it going to
configure it using PowerShell. It’s then going to use
YAML to go back and write into that documentation for
you, what it did. So that you don’t have
to document even that. The goal is, we’re gonna get as
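A sketch of what that write-back step might look like: after a template deploys and configures a control, the tooling records what it did into the documentation. The control ID, field names, and YAML shape here are invented for illustration, since the real SSP schema and pipeline are internal:

```python
# Hedged sketch: emit a YAML fragment documenting a control that was
# satisfied automatically by a template deployment, so the customer
# doesn't have to write that part of the SSP by hand.

def control_record_yaml(control_id: str, action: str, evidence: str) -> str:
    """Return a YAML list item describing an auto-satisfied control."""
    return (
        f"- control: {control_id}\n"
        f"  implemented_by: automated-deployment\n"
        f"  action: {action}\n"
        f"  evidence: {evidence}\n"
    )

doc = control_record_yaml(
    "AC-2",                               # hypothetical control ID
    "deployed baseline account policy",   # what the deployment did
    "arm-deployment-1234",                # hypothetical deployment reference
)
print(doc)
```

The fragment would be appended to the customer-owned portion of the documentation, which is how the responsibility share trends toward zero.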
close to 0% responsibility for you as possible. We do not want this to
be ever an impediment. In fact, the message of my team is
not that we’re supposed to build the most secure and compliant Cloud. It’s that our customers’ use of our
cloud should be the most secure and compliant Clouds on the planet. There’s no point in me building
security if you can’t consume it. There’s no point in me getting
compliant certification if you can’t consume it, and consume it
easily by making all that available. And finally we have two flavors of
expertise that we make available. If you are a commercial provider,
so not a government entity, because your budgets are more
difficult to deal with, but if you’re a commercial provider and you want to get a certification
build on top of Azure, and you don’t have your own internal
consultants that you all ready like. You can get access
to our consultants, we have a stable of those who
know our architecture well, and you get them at discounted rates. If we send you to them through
the blueprint program or vice versa, them to you
through the blueprint program. They have guaranteed that they will only charge you
what they charge Microsoft. Now, with some of
these organizations, I’m spending millions
of dollars a year. I get a large volume discount. They consider you part of that
overall Microsoft infrastructure, Microsoft capability. So you get my rates, not what
they would normally charge you, even if you just wanna
spend $20,000 or $30,000. This way you’re gonna get a lot
more value out of that than you normally would. The other thing that you
get access to is my staff. So me and my individuals who run
our blueprint program are available to do security architecture reviews
for you. If you are stuck and can’t figure out how to
meet a particular requirement or goal, or setup, against any of the regimes
that we currently have in place. We can review that for you. We will also, if you’re getting
stuck with your internal individuals inside a federal agency, or
you’re stuck with the PMO, or the FedRAMP JAB, we will come
along with you to your meeting. And explain to those organizations
why what you’ve done is exactly like what they’ve already approved
for us, and leverage the political capital we gained by getting through
their processes the hard way, the brute force way,
to streamline that for you. Now, we have tested this in several
federal agencies at this point and we’re seeing roughly a 50% decrease
in time from dev to production, through the certification, above and beyond what a normal federal agency
does for their own in-house build. This isn’t complicated Cloud stuff; I’m showing a 50% decrease
in time for just a basic app that they would run in their own
infrastructure normally. And then we’re seeing if you have
to go all the way through getting a FedRAMP certification, you need the JAB to sign
off on your architecture. Up to a 73% reduction in cost
using all of these elements and products, and we’re doing this
in several different verticals against many different frameworks
and certification standards. We’re replicating all of
this out across our overall infrastructure and capability. The goal here is
certification, compliance, and security should never be
an impediment to you. In fact, we think, and we’ll get to this in the second
half of the presentation, the number one reason you should
move to the Cloud is security. And if I wanna get you into
that environment because it’s more secure, then let me solve
the compliance issue for you so that it’s also easier to
move in that direction. The last piece is our hybrid story, and
the biggest or most effective tool for our overall hybrid
capability is our Operations Management Suite, OMS. It’s the Microsoft Operations
Management Suite. It also gets us
security capabilities. This is effectively a single pane of
glass, through which I can manage infrastructure no
matter where it sits. I can manage it in my on-prem
environment, I can manage it in other clouds, so long as those
other clouds allow me to install the agents, and I can talk back
to Azure through ExpressRoute or through the public Internet, or
I can manage resources in Azure. I don’t have to think about
them as different things. It also allows me to do things like,
I have a traditional application. I came out of the financial
industry a very long time ago. We have a bunch of old architecture
that was written long before claims-based auth existed. It will be almost impossible
to migrate that stuff; it will have to be rewritten. Instead, Server 2016 running
on premises is cloud-aware. It knows of Azure. Say you use the
automation service here in Server 2016 to run one of these old
applications on-prem, and that server runs
out of resources: it doesn’t have any more storage,
it needs more processing power, more memory. Instead of reaching out and
trying to balance on prem, you can have it reach and
balance into the cloud. So you have all the architecture
that normally could not run in the cloud, you’re still running it in a
very limited footprint on premises. But have the capability as needed
to spin up additional resources in the cloud or automatically write, store it to cloud, using those
automation and backup features. I can, even if I just want to test
the waters, I just want to dip my toe in, I’m gonna cancel
my Iron Mountain contract, I’m gonna do live data application, you can do that very easily using
our back up services through OMS. So I can run an entirely
on premises infrastructure. You have an existing data center and
you just wanna migrate that over. And you wanna be able to, or
back up the important things. You can do all of that
through the cloud and through these on prem
hybrid capabilities. And then finally, you get one
sort of all-up view of security and health analytics. So I’m getting security monitoring,
telemetry, configuration problems against my entire fleet, no matter
where it happens to be sitting. And then I can also use the log in
analytics engine to start doing searches, and doing really
sort of big data analytics against log data that’s
coming through my OMS suite. So let’s say, for
example we identify a new zero-day that comes out and there’s a
signature that we can identify: hey, if on this particular host this particular process has ever
been run, we know that’s a problem. I can actually then go back and
look through my log data using log analytics to look for
those signatures. And find out, did this ever
appear on any part of my fleet, no matter where that
exists around the world? So just an example of what
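In spirit, that hunt is just a filter over collected fleet telemetry. Here is a self-contained sketch, with made-up log records and a made-up indicator hash standing in for a real Log Analytics query:

```python
# Sketch of a fleet-wide zero-day hunt: given log records gathered from
# every host (on-prem, other clouds, Azure), find each host where a
# known-bad process signature has ever appeared.

bad_signature = "9f2c-bad-hash"   # hypothetical indicator of compromise

fleet_logs = [
    {"host": "web-01", "process": "svchost.exe", "hash": "aa11"},
    {"host": "db-02",  "process": "evil.exe",    "hash": "9f2c-bad-hash"},
    {"host": "app-03", "process": "nginx",       "hash": "bb22"},
]

def hunt(logs, signature):
    """Return every host where the signature has ever been seen."""
    return sorted({rec["host"] for rec in logs if rec["hash"] == signature})

hits = hunt(fleet_logs, bad_signature)
print(hits)   # → ['db-02']
```

In the real product this filter would be expressed as a Log Analytics query over the telemetry OMS has already centralized, rather than over an in-memory list.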
that console looks like, you can see we have assets,
we have servers set up worldwide. They’re in places where we don’t
have data centers in some of the locations, to give an example
of what that would look like. I can get security
analytics back against it, and the other thing is this isn’t
just a SIEM where I’m seeing stuff. I can actually click through that,
and that enables me to then start
doing configuration management. I can just do a couple
more clicks and actually fix things on the fly,
in the cloud. We’re taking care of a lot of
that telemetry and threat and concern for you. The next half, now, or slightly more
than half is transitioning into our security philosophy. Before we do that,
are there any outstanding questions on those core pillars,
our capabilities, what we’re offering in
Azure Government, and what we think makes Azure Government different
from other similar offerings on the market? [INAUDIBLE]
>>Yeah, absolutely.>>So the question is, what’s the difference between Azure
Government and Azure non-Government? And what does that mean for
government entities or consumption? So our non government or
what we call Azure Commercial, is actually a worldwide
instantiation of Azure. It’s 38 regions around the planet,
currently, that we instantiate that. Now, you can pin your subscription
to stay just in the US, and make sure all of your
data stays in the US. So there’s not a data sovereignty
concern for US government entities, you can stay local. And that’s true actually for
any of our regions. You can do that in Canada if
you wanna stay in Canada. You wanna do that in the UK,
you can do that. Australia, that’s all our
global infrastructure. From a US government perspective,
we have a couple of differences. We only have FedRAMP Moderate on
that environment, whereas we have FedRAMP High and DISA L4 and
L5 on our government environment. We only make our citizenship promises
in our government environment. We don’t make those against
our commercial infrastructure, because we can’t guarantee
only US citizens. So there will be non-US
citizens who have access to production environments
in our global fleet because that’s a globally
managed service overall. Now, the code is all the same, the
services function exactly the same. So even let’s say,
I’m a state government right? And I’m concerned about where I
wanna be and I notice that over here in Azure Government we have
certified a service, let’s say, HD Insights is on my road map, it’s
gonna be certified here very soon. It’s currently submitted
to the government. We’re gonna get that certified. You believe that that means that,
okay, we’re doing security well,
that’s trustworthy. But you actually wanna run it
over in commercial for some reason. The service runs the same way,
right? So, the only difference is I’m
only doing the compliance uplift, the cost, that portion of a hundred
million dollars I have to spend to maintain all that in
the government environment. The tech is actually the same. The biggest difference is that you
can actually run all those global workloads really globally. So if you did have customers who for
whatever reason wanted to do, have user assets in Canada or China or
Europe and they wanted to be able to connect them more quickly, you can
do that in the global environment. That’s gonna be much more difficult
in the government environment. If you didn’t qualify, like you were
sort of a quasi-government entity like an EDU that may or not may
not qualify for our government eligibility but they still want
those same kind of protections. You can make some assumptions based
on what we’ve done in the gov, that that is all still true in
the commercial environment, but we don’t take the time to get
them independently certified in the way that we’ve done. Did that answer the question,
or does that make sense? Okay, great. I saw another question in the back.>>Will there be an L4 option for
under 500 [INAUDIBLE] contractors this calendar
year [INAUDIBLE]?>>So this is where I get to say,
I’m only responsible for security of the platform. I have nothing to do
with how you buy it? So I do not know the answer
to that question. And if anybody asks,
I’m also not a lawyer. So any law questions,
I’m going to punt those as well. We can find out for you. We’ll have the contract team. But I don’t happen to know. There’s another question.>>So 2 questions, how are you
in getting impact level 6.>>Yep.>>And in terms of
express route [INAUDIBLE]>>Okay.>>[INAUDIBLE]
>>Mm-hm.>>Public internet, can you
expand on that and how it works.>>Sure, so first I will plug another
session that’s coming up tomorrow. There’s a whole hour on exactly
how ExpressRoute works. Effectively, ExpressRoute
is a private connection. So we go through a carrier neutral
meet me points like Equinix facilities. We have connected our dark
fiber network to that facility. You can connect directly
to us in that location through whatever network
you want to connect. There is no public
internet in between. So it definitely does not go
through the public internet. The L6 question is not something
I can answer in this forum.>>Is there a concept of Windows 10 desktop as a service? I know right now there’s APIs,
I have options where I can do it. But there’s a competitor out there
that has something called workspaces that I wish we had
the same thing in Azure. Any plans for having a desktop
as a service on Azure?>>Got it, so the question is any
plans on having a desktop as a service? The honest answer is we
started down that path. And we decided this is one of
the things it’s better for us to partner. We don’t wanna try to make
everything ourselves, right? Do everything ourselves. I get the same question about,
are you ever gonna do a medical records database or
a document repository? There are some things that are just
not our core competency, and even though you would think Windows
is, Virtual Windows 10 is not. We decided instead to
just partner with Citrix. It is our most commonly
downloaded workload. Now, does that mean, with
the right business case and different engineering, we might
not do it in the future? Sure, but as far as I know there’s
nothing on the roadmap for the time being. Another question at the back.>>Do you have on
the roadmap any lower-cost workloads, like dev/test, for
non-production in government?>>Okay, so the question is, do we have any plans for
lower cost workloads for dev tests? You can absolutely do dev
tests in the government. In terms of costing again, this is where I have nothing
to do with how we buy it. I don’t know.
>>Stand up.>>[LAUGH] I thought I saw another
hand or two, anybody else? Okay, yes?>>[INAUDIBLE]
>>Right, okay.>>[INAUDIBLE]
>>Sure, so the question is, is the security
documentation only available for the government cloud or is available
for the commercial cloud as well? Our FedRAMP documentation
is available for the commercial cloud as well. There is blueprint documentation for
that, only for the services that are included in that scope and
it’s a relatively small scope. You could infer from our other
documentation that sort of stuff. But our goal is, we really think that government
customers should be using this government community cloud that
we’ve constructed for them. That there’s lots of good reasons
why that’s a better solution for government. So we’re not putting as
much effort into trying to provide other services
on the commercial. But, you can make some
inferences based upon what we have done in the government, because the code that runs
between them is exactly the same. Another question? I actually, I believe so, I know the recording,
I don’t know exactly how that works? I’ll check back and we can meet up after if you grab me
at the ask, this is so hard to say, ask the experts session,
later today, we can talk about that. Okay, one more, yeah.>>Yes, you said 60 days.>>Yeah.>>[INAUDIBLE]
>>Yes.>>How does, you said [INAUDIBLE]
>>Sure, so the reason why, so the question is, we went from
a year and a half to 60 days. How does that differentiate us
against the other competitors? The reason why it
used to be a year and a half is you would always add new services as part of an annual assessment. The government actually only has so much appetite for taking new assessments, so even if I wanted to do a six-month cadence, FedRAMP won't accept it because they don't have the capacity to take it. So we sort of solved the problem for them. We said, you need new features, you need an agile way, and Blueprint also started this way: we have people who wanna build SaaS offerings on us. You don't have the capacity to do a full assessment, so how can you do a partial assessment? We started it as
a partnership with FedRAMP. We built it together with FedRAMP. And then we’ve just evolved it and
taken it into a much bigger space. We did the same thing where we're doing a partnership with them for our agile onboarding of new services, because what would really happen is 18 months. I deploy a service, now it has to magically hit my annual window, so we're getting a six-month lag there. I hit my annual window, but really, let's say I just submitted paperwork
to the government March 1st. Now, this is their new
accelerated pay plan, so hopefully I’ll get
it back this year. Previously I would submit paperwork, I would maybe get it back in 9
months, more like 12 months, sometimes 18 months before we
got an answer back on that. So we would actually lose six months
of time potentially any given year. What we have done is we’ve changed
things around, we’re gonna say, we do security the same
way every time. My method of access control,
my method of monitoring, my method of BC/DR, it's the same for
all of my services. I don’t do snowflakes, as much as
possible, I don’t do snowflakes. We have a couple of weird things and then we force them into
the traditional pipeline. Everything else,
we do it standard and unfortunately the snowflakes,
they just have one or two things that are different
where everything else is the same. So, we have our external
auditor come out and certify the way that we do things,
we certify that approach and then I submit to both my external
auditor and the government evidence that this new service is
plugged into all those approaches. I give them the same sort of calls
that they would normally pull out to see yes, HD Insights is following
our formal access control process, is following our formal
monitoring process. Everything else that we do, that actually covers all
of our FedRAMP obligations. It’s about 20 independent
checks that we put every new service through. It's the kind of thing that I do before I submit something to an audit in the first place, cuz I don't want to fail an audit, ever. We just operationalize that and
then we hand it to them faster. So, I’m essentially giving
them my internal homework through a very rigorous process. It's about a 25-page SOP, and
what we’re really doing is saying, this service is not
a significant change. We are demonstrating that it is not
a significant change by providing all of this evidence and
having an attestation by our 3PAO. That’s how we get to 60 days. Now that process we
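A loose sketch of that onboarding gate follows. This is hypothetical: the check names are invented, and the real process runs about 20 independent checks, but it shows the idea of demonstrating "not a significant change" by evidence against a fixed set of standard controls.

```python
# Hypothetical sketch of the "same security, every time" onboarding gate:
# a new service is run through a fixed set of standard checks, and the
# results become the evidence package handed to the 3PAO and the
# government. Check names are invented; the real process uses ~20 checks.
STANDARD_CHECKS = ["access_control", "monitoring", "bc_dr", "patching", "logging"]

def evidence_package(service: str, results: dict) -> dict:
    # any failed standard check means we can't claim "not significant"
    failed = [c for c in STANDARD_CHECKS if not results.get(c)]
    return {
        "service": service,
        "failed_checks": failed,
        # all checks green => the service is not a significant change
        "significant_change": bool(failed),
    }

pkg = evidence_package("HDInsight", {c: True for c in STANDARD_CHECKS})
print(pkg["significant_change"])  # False
```

The design point is that the assessment work moves from "assess this service from scratch" to "show this service plugs into the already-certified approaches."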
developed with them. They are publishing this month or
next month, they keep slipping their deadline, a formal process that
any cloud provider can use. So we developed something
that is vendor neutral, anybody can use the same process.>>So you could just [INAUDIBLE]?>>Exactly, yeah,
so the question is, because it’s not
a significant change, we don’t have to do a full security
assessment, that is correct. Because I’m showing that this
is actually just, it’s more like change management than it is like
building a new significant thing. Because I’m managing
it all the same way, then I’m able to move
it in an agile way. Move it in quickly. Yep, absolutely. All right, so let’s move
on to security philosophy. I hinted about this earlier. Most customers who came through us
early, the early adopters, were for one of these top three reasons,
right. They wanted to do a digital
transformation to make themselves different in their market. They just wanted agility. Dev test is a great
agility question. I wanna be able to just spin up and
spin down, and play, and I wanna see, well, what happens if I plug
in F5 versus a Barracuda in here? How does that affect my
application or change? You can do that all very easily. Or maybe I have demand
cycles that are uneven and I wanna be able to do spin up,
spin down as necessary, or we had customers who have just at
the end of a depreciation cycle and they didn’t wanna make another
five to seven year investment. They wanted to make
a one year investment. And they moved and
did a big lift and shift. We think, though, the actual number
one reason you should move to the cloud, and Gartner says by 2018,
the number one reason that anyone will be moving
into the cloud is security. There are things that all of the
hyperscale providers are capable of doing that radically exceed
what’s possible on-premises. Because of the infrastructure and investment we’ve had to make to
make these global fleets and national sovereign clouds work
in any sort of sensible way. But on top of that, it’s really
that we have now the dollars and the expertise, and the consolidation
of talent, to make big changes. And we’re really kind of getting
pretty far ahead of the curve, and making significant security updates. We’re using the fact that
cloud is an inflection point to change how we approach not just
[INAUDIBLE] technology but security. So when you’re moving
into the cloud, your biggest challenge should be,
don’t do what you’ve always done. Use this as an opportunity to think
about how to do things in a new way. There’s lots of capability there and
we’ll dive in some of those. One of the things that we’ve
done and are working on doing is radically changing how you
control access for users. So when we started this cloud orchestration, it was just the basic things that you would do today, that everybody's done for the last 20 years for access control, which was immensely painful. There's no way we can manage
millions of hosts around the world, with admins logging in. So we looked into
a couple of things. How do we automate that? And then also,
how do we restrict access, right? Because we wanna get
a lot more fine grained. So middle piece of this story here. The way that we restrict access, what’s available today both
in the cloud and on-prem and for you as a customer,
is what we call Just in Time access. We’ve been running it internally for
a few years. None of my admins
have any privileges. They can’t do anything
at all by default. They must sign into the Just in
Time system and request access. In Azure it’s called Just in Time,
in Office it’s called Lockbox. So by default I can’t do anything. I want to perform an action. We make a call as to whether or not
that action requires, is allowed for automatic elevation. So I think that this is non
sensitive you can just go ahead and do it, or I want a second
set of eyes to say yes. So if it is something that we
considered to be privileged; anything that potentially exposes
customer data in our world is privileged, then we say actually we want
a second set of eyes to say yes. To do that in Azure it is
your dev ops manager or an on call member of your
team if it’s after hours. In Office you can actually, if you have the right license,
make yourself the second key. So if you think about
submarine movies, it takes two keys to
launch the missiles. I came into the financial world and I only ever had half
the combination to the safe. I’m still not sure to this day
why they gave the IT guy any part of the combination to the safe,
but I could only have half of it. I can’t get to the money by myself. Same idea. I do not want my admins getting to
the money, your data, by themselves. It requires a second set of eyes, where in the case of Office, it may require your eyes to say yes. You can do that today
through security workgroups, same sort of ideas, so if we’re
doing this directory federation, you can set up a series
of security workgroups. You can say hey, my Exchange Admin
can automatically elevate into the Exchange Workgroup,
because that’s their daily business. All right, I wanna go ahead and
let that happen. But maybe they can’t elevate
into modifying DLP rules. They can just do basic stuff. Anytime I modify my DLP
rules because that is so impactful to my business,
I always want two people to do it. Or, common scenario I ran into, I
also did ops IT for almost a decade. You are not normally the main
admin for something, but you are the back-up admin if
the main person is out, right. And so,
what do you do in that scenario? Do you give me permissions to the
things that I’m only a backup for all the time? Do you constantly turn up and turn down my permissions
when that person’s out? Is that gonna cause
latency issues or problems if I have to deal
with an incident immediately? Well in a Just in Time world you
can say I have permissions, but it requires a second
set of eyes to say yes. So I can elevate whenever needed,
whenever this business case justifies it, but somebody else
has to be in that approval chain. I can't do it myself, so we're really reducing that overall insider-threat sort of concern. You can also say hey, if you
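As a rough illustration, that elevation flow might look like this in code. This is a hypothetical sketch: the action names, and the rule that the approver must differ from the requester, are my assumptions, not the actual Just in Time implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of Just-in-Time elevation: no standing privileges,
# automatic elevation for non-sensitive actions, and a second set of
# eyes for anything privileged. Action names are made up for the example.
PRIVILEGED = {"read_customer_data", "modify_dlp_rules", "extract_key_material"}

@dataclass
class ElevationRequest:
    requester: str
    action: str
    approver: Optional[str] = None

    @property
    def granted(self) -> bool:
        if self.action not in PRIVILEGED:
            return True                      # automatic elevation
        # privileged: needs a second set of eyes, never the requester's own
        return self.approver is not None and self.approver != self.requester

print(ElevationRequest("alice", "reboot_server").granted)                       # True
print(ElevationRequest("alice", "read_customer_data").granted)                  # False
print(ElevationRequest("alice", "read_customer_data", approver="bob").granted)  # True
```

The backup-admin scenario described below fits the same shape: the backup keeps eligibility all the time, but a grant only happens when someone else approves.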
access PII, or customer data, or anything that I'm concerned about, you can do that. It's both available on-prem in Server 2016, and in Azure Active Directory Premium, which is in our commercial
environment today, and is coming in the government
environment very, very soon. You have a question?>>Yeah, so is that, I guess one of
the insider threat concerns that people might have with
moving to the cloud is, access to the key management
systems involved. Is that one of the mechanisms that’s
used internally for making sure that somebody who's gotten access to
that is permitted and tolerated?>>Yep, so the question is, one of
the biggest concerns is access to some of the services like Key Vault,
which is coming up on my list here. A control for
keying material, right. So we would say keying material is
absolutely something we should be concerned about. That is
sensitive data, unquestionably, because it allows me to
bypass security features, and potentially access customer data,
even data that they believe is encrypted and offline for everybody
else, very, very sensitive. We would always require two keys
if you’re gonna be able to extract something that you could then abuse,
right. So if this is gonna increase your
capabilities, in an Office world you can then make yourself the second key in that scenario. So absolutely that's part of
addressing the insider threat. We're going another level farther though in Azure when it comes to
addressing insider threat, and that happens in the top
part of the slide. That is, every action that we currently do, we don't do it like a normal admin where we sign in, or even use PowerShell scripting to try to automate stuff. What we do is we have a bunch of predefined code called workflows. They're formal code that has gone through the SDL just like any other part of Windows or Office or anything else. And it does discrete actions
like my workflow might reboot a server, right. Or it might extract a piece of data,
or it might, in some cases, grant me interactive logon onto
the underlying operating system. We classify all of those
as privileged or non. Every time we execute
a privileged workflow, we have a team of engineers that
looks at that and says, okay, well what was the end result? Why did we need to do
this in a privileged way? I’m gonna write a new workflow that
gets me to that same result without having to go through
a privileged mechanism. Our target is 12 to 18 months, we wanna be as close as
possible to zero humans having interactive access to our
production environment. So the real way we’re gonna
answer your scenario, is there just won't be any humans, outside of incidents, even capable of accessing it. What we're doing is
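A toy model of that workflow catalog might look like the following. Everything here is hypothetical (the workflow names, the review queue); the point is only the shape: discrete, pre-reviewed actions instead of interactive logins, with every privileged run recorded so engineers can replace it with a non-privileged equivalent.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical sketch of the workflow-catalog idea: admins run predefined,
# reviewed workflows instead of logging in interactively. Each privileged
# execution is logged so engineers can later design a non-privileged
# workflow that reaches the same result.
CATALOG: Dict[str, Tuple[bool, Callable[[], str]]] = {
    # name: (privileged?, action)
    "reboot_server":     (False, lambda: "server rebooted"),
    "interactive_logon": (True,  lambda: "interactive OS session granted"),
}

review_queue: List[str] = []  # privileged runs queued for de-privileging work

def run_workflow(name: str) -> str:
    privileged, action = CATALOG[name]
    if privileged:
        review_queue.append(name)  # engineers ask: how do we avoid this next time?
    return action()

run_workflow("reboot_server")
run_workflow("interactive_logon")
print(review_queue)  # ['interactive_logon']
```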
changing the world, where I have to trust that human to
do the right thing, because I’ve given them the capability to do
more than is necessarily required. I’m now actually giving them access
to a catalog of capabilities that are very discretely defined. That’s my capability
to achieve things. I’m doing it for a couple reasons,
one, it’s very, very expensive for me to screen, monitor and
train a large pool of people, I want that pool of people
to be as small as possible, and the only way I can do that is with automation, by being able to remove most of these admins from privilege requirements, cuz then they can't touch customer data. The other reason is 85 to 90% of my
threat landscape begins with humans. Most of it, not malicious, we’ve already pretty well dealt with
malicious through Just-in-Time. It’s really that people are just
bad at repeatable tasks. We’ll get in, we’ll make a tweak. We’ll forget to put
the firewall back up, we'll forget to do something. We will click on the wrong thing, we'll get phished, or we'll get confused by an adversary. 90% of the threat vectors for our core adversaries
come through this space. And it’s one of these places where
they’re allowed to make small investments, and they force me to cost myself
a ton of money to combat it. We're willing, as Microsoft, to make the big engineering uplift to take that problem away. So if you think about it, everything your admin has to do today or might ever need to do, how long would it take you to turn that into software? We think that's 12 to 18 months for us given our engineering expertise,
and our problem set and what we’ve already done
over the last five years. Now, just like we made Just In Time
a thing that you could use, we used it for
ourself a bit before and then we said this is great let’s
turn it over to customers. You’d better believe once we get to
the point where we’re stable and happy this is coming
available to customers. What you’re gonna see is our
workflow engine will be available and published, you will be able
to do workflows rather than just security groups through your
Just In Time elevation. We will publish our catalogue
available so that you can use it. We will publish a scripting language so that you can use it. And we will curate an open source
community around these because the other thing that’s important is
we all need to remember that we’re all in this together. Obscurity, hiding how we do things,
is the opposite of security. Radical transparency is how I get actual security. Cuz I'd much rather have the people
in this room pointing out my flaws, than my adversaries finding them. If I’m wrong,
if I’ve not done something well, my solution is not to hide that. That’s only ensuring that
the advanced persistent threats, who realistically have more funding
and more resources than even Microsoft, and
I’ve got thousands of engineers and I’m spending $1 billion
a year on this. Even I am smaller than
those organizations. Any federal agency is
definitely smaller than those organizations with
the potential exception of the DoD, they’re the number one
target organization. Microsoft is number two
following closely on their heels in capabilities as a result. We need to be able to share
information and work together as a community, because if we’re in
our silos, they’re always gonna win. But they don’t have
more resources and more intellectual capacity
than all of us combined. Once you start tipping
the tables that way, once you start getting radically
transparent, we are far in the lead. There’s a lot more
of us who wanna be not criminals than people who want
to be criminals in the world. And so we are also big
believers in transparency. And part of why we wanna
publish this and make it available to you is not just that
we’re doing the right thing for us. But, you are all gonna find problems
that we haven’t solved yet and make our own security better. At the same time you’re gonna
radically change your security. The biggest way the cloud
does this is changing what the security plane is. The way that we think about
approach to security. So traditionally, or for the last 20
to 30 years, the network plane or boundary has been my security plane. What I do is I build walls and
moats, and I build a castle, right, and then I put my data
inside that castle and hope that it’s all protected. And as a result, depending upon
how valuable that data is, I have to build this really
ridiculous castle every time. And I have to really control who
has access and it’s now very difficult to share information
within authorized communities. I have to replicate my castles
a bunch of times if I have different levels of security and data and
it becomes very complicated. In the cloud based world, using
data loss prevention technologies, identity is my new plane. So we actually don't
really care very much about the location based stuff. In fact, I advocate,
when we do private sessions and engineering sessions with people: everything you're thinking about,
layer three down, OSI model. Forget about it, don’t do it there. There’s better places to do it, the only reason we did things
at those layers is because for 20 to 30 years that’s the only thing
you could guarantee would exist. Network was ubiquitous, therefore network was the security plane, but it's a really terrible security plane. You drown in useless data. You create single points of failure. You add latency, and it was never
architected to be meaningful. It’s not designed as
a security layer. There’s lots of security
designed in the upper layers. Because cloud is one set of
infrastructure, beginning to end, you can guarantee what’s
going to be there. You can do more intense and more
meaningful monitoring up the stack. That’s what we do for ourselves
to keep ourselves from drowning. This is a fun little example, a couple of data points
along these lines. If Office 365 were to store and maintain 90 days of NetFlow, and it has had to at various points to meet regulatory security requirements, that's 10x the volume of customer data in all of the cloud. How can we keep up with that? And certainly I can't maintain nine years of data, because there's just not enough storage to make that economical. But if I'm only keeping 90 days, my advanced persistent threats are perfectly happy to be on a 100-day cycle, and I'm just not gonna see the repeating pattern. So we have to get smarter about what
we’re monitoring so we can have less data, not drown in it, but still
keep it for a longer period of time. Now, that being said, in Azure, we consume seven trillion
events a day that we process. So another example with
the hyperscale cloud and the security advantages
that you get out of that. Imagine trying to build a scene
that can manage and tolerate and deal with all of that an out of
data flow that’s what we do for just Azure in our world process and manage all that using that
advanced threat analytics. And we’ve actually gotten smart
enough around threat analytics that we have our own intelligence
organization internally that fingerprints and identifies these advanced persistent threats. We have our own internal code names,
like helium. And we know the code
mistakes that they make, they get sloppy too just
like anybody else does. And we can construct fingerprints for things like zero days, because we notice these common mistakes happening in code, or they name processes in the same sort of predictable manner, or they use the same set of passwords every single time in certain sets of code, and I can just find it immediately and
block it out. And in fact,
if you’re a Windows customer, every 30 days we pump all your
release where we force the operating system to drop all of those things
that we’ve detected to protect you automatically from all that,
we do that in the cloud as well. I have digressed away from
data loss prevention though.>>[LAUGH]
>>There’s too many cool things that I like to talk about. The advantage of data loss
prevention is I really get to the world where, I don’t care
where the information sits. I don’t care if it’s in my cloud,
if it’s in AWS, if it’s on-prem, if it’s on this laptop,
if it’s on this phone, I can protect it all the same. I don’t have to care about
the location of the information. I care about the location or
the information itself. I do that by assigning tags,
I classify and label all of my data, and because that tagging engine is tied back into my rights-permissioning engine, I can control the behavior of that tagged resource however I want to. So I can do things like say,
obvious things who can access it, should it be encrypted,
can it be sent to an external party? Then I can do less obvious things,
like cannot delete. I wanna assign a legal hold flag. I can say, actually,
once this flag is assigned, you cannot delete this file,
even if you’re authorized otherwise, because I might have legal
action pending here. You can do things
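Those tag-driven rules, like the legal-hold flag, can be sketched like this. It's a hypothetical model, not the actual DLP engine; the tag names and logic are invented to show behavior following the label on the data rather than its location.

```python
from dataclasses import dataclass, field
from typing import Set

# Hypothetical sketch of tag-driven protection: a legal-hold tag blocks
# deletion even for otherwise-authorized users, regardless of where the
# file lives. Tag names are made up for the example.
@dataclass
class TaggedFile:
    name: str
    tags: Set[str] = field(default_factory=set)

def can_delete(f: TaggedFile, user_authorized: bool) -> bool:
    if "legal_hold" in f.tags:
        return False               # overrides normal authorization
    return user_authorized

doc = TaggedFile("contract.docx", {"encrypt", "no_external_send"})
print(can_delete(doc, True))   # True
doc.tags.add("legal_hold")
print(can_delete(doc, True))   # False
```

The same pattern extends to cannot-copy, must-encrypt, or who-can-access rules: each is just another tag the rights engine interprets.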
like say cannot copy. You can start to get really fine-grained with what I wanna control; I don't wanna have to create security groups to control things. But I can tag some things if I have multi-nation data. So I'm working with
NATO as an example. We have lots of militaries; they like to share data, or they try to share data, but they all tag and classify it differently. Let's say the US military is gonna tag a file as national security sensitive, controlled unclassified information, NSS CUI, and then pass that over to the UK. Currently, they level that up to what they call Privileged, but when they send it back to me, well, it's Privileged, and now it ranks as my Secret. So data that was unclassified, just by passing twice, has become classified in my environment. I've created lots of problems. Instead, what I can do is just
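That round-trip escalation, versus a single shared tag, can be shown in a few lines. The mapping tables below are illustrative, not any real NATO or DoD classification crosswalk.

```python
# Hypothetical sketch of the cross-nation labeling problem: mapping labels
# on every hop escalates the classification, while one shared DLP tag
# survives the round trip unchanged.
US_TO_UK = {"NSS CUI": "Privileged"}
UK_TO_US = {"Privileged": "Secret"}     # lossy reverse mapping

label = "NSS CUI"                        # controlled but unclassified
round_trip = UK_TO_US[US_TO_UK[label]]
print(round_trip)                        # 'Secret': it came back classified

shared_tag = "NSS CUI"                   # one shared tag, interpreted locally
print(shared_tag == label)               # True: no reclassification needed
```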
apply an NSS CUI flag to that data using DLP. If we’re all part of a shared engine
they can then have their rules; their identity stack, or the shared identity stack, determines what their users need to have in terms of privileges to be able to read that data. I don't have to reclassify it,
I don’t have to retag it and approach it and
I can share information. Or let’s say for
example I have a bunch of data and some of it’s PII and
some of it’s not PI. Some of my users are authorized
to CPI, some are not. I can tag data to control who can
access it, what can happen with it, must it be encrypted. But not have to restrict total
availability and capability. It’s a pretty radical change and
it’s a big intellectual uplift but it’s a dramatically
powerful thing to do. But it means that we’re gonna
be able to do two things. One, you’re gonna be able to get
a lot more efficient with your data, sharing data becomes easy. Two, you don’t have to
duplicate architecture. The example I like to give is let’s
say we have a SharePoint site, it’s 2,000 daily users in it. They use it for their mission every
single day, and then we do get sued. And there’s a legal hold on
some of the data in here. Now depending upon my chain of
custody events either I have to move those files to a new location,
so I’m duplicating architecture. Or worse yet if chain of custody
won’t allow me to move them. I have to lock all the rest of the
users who need this data out from the entire thing or move the rest
of the data into a new location. So I’m again duplicating
architecture but in a DLP world, I’d just assign a legal hold flag. I can make it not erase. I can control who has access to it. I can make it not even visible
in the SharePoint site. I didn’t have to change anything. I didn’t have to
duplicate architecture. I didn’t have to interrupt the way
that all the rest of these users are operating. But I have achieved
that level of security. Which brings me to the other thing we can do, and the other advantage of data-tied security, as opposed to location-tied security; cuz I don't want to make it sound like database security. It's that we tend to over-secure our information. We secure things up to whatever
we think the highest level is. But we are probably not treating our
most secure data at the appropriate level, as well as we should, because we've made it so difficult to interact with. We have to walk this fine balance between how difficult is it to
use versus how much do we want to protect it. In a DLP world, I can reduce the
barrier to use pretty dramatically. Which then allows me to scale
up the barrier for the last few things that I really care about and
allow me to encrypt it. So let’s give an example here. Using the rights management
service in holder of key. There’s some subset of data in
your cloud instance that you really care about, that you don't even want Microsoft
to be able to read it, right? You want to encrypt it
using a key that you own, you’re gonna store it in the cloud. You’re fine with KeyVault and
other things in general. But for this subset of data it’s so
impactful, so meaningful or so regulatory required, you don’t
want me to have access to it. So instead, you extend using our
rights management tools on-prem. Use a certificate authority that you
have on-prem to do the key material, so that we create a secure tunnel. We send the data to your on-prem rights management service. You encrypt it there. I don't get to see the key. It comes back up to the cloud. It's now encrypted. Now nothing else can see it
unless you bring it down and then put it back. So that whole bring it down, put it
back thing has added a lot of cost. We don’t want to do
that all of the time, but there are certain pieces of data
for which we should want to do it. And we probably don’t do it
currently in existing location-based architectures because
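The holder-of-key flow just described can be sketched like this. The XOR "cipher" below is a toy stand-in for the real rights-management cryptography, labeled as such; only the data flow (cloud stores ciphertext, key never leaves the premises) is the point.

```python
import itertools

# Hypothetical sketch of holder of key: data is encrypted on-prem with a
# key that never leaves your premises, so the cloud stores only ciphertext.
# Toy XOR cipher: symmetric, so applying it twice restores the plaintext.
def toy_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

on_prem_key = b"held-only-on-prem"   # the cloud provider never sees this
cloud_store = {}

# on-prem: encrypt, then hand only ciphertext up to the cloud
cloud_store["doc1"] = toy_cipher(b"sensitive record", on_prem_key)
print(cloud_store["doc1"] != b"sensitive record")     # True: cloud sees ciphertext

# bring it back down and decrypt with the customer-held key
print(toy_cipher(cloud_store["doc1"], on_prem_key))   # b'sensitive record'
```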
it’s just too cumbersome. DLP makes it easy. Yes, question in the back?>>[INAUDIBLE]>>Okay, so the question is in the case of ransomware and
you’re using this type of strategy how does the bringing it back and
forth affect it? The answer is really gonna be
specific to how that individual piece of ransomware functions and
how it’s locked it. So in this scenario,
what would have to happen for a ransomware to
probably be effective. Is that ransomware would have to
be in your on prem architecture, somewhere that it’s
capable of capturing that. If the ransomware were in fact
on the overall infrastructure. I mean, I guess the actual answer
is that this not gonna have any effect on how ransomware
functions, one way or the other. It’s a different sort of strategy,
and we’d have to look through your exact scenario to figure out how
we can protect against that. That’s part of where our
threat analytics and everything else that we’re
doing from an investment and security sampling happens. But DLP is not really
gonna change that. Because if they then encrypt your
data and hold the key from you, that’s still happened right? The advantages of the cloud
is if you do replication, we’ve got the three rights and
the [INAUDIBLE] geographic re-write. Now they are having to steal or they
are having their encrypt multiple copies and prevent access to it. It’s gonna be a lot harder to do a
ransomware attack against the cloud. Than it is against an on
prem architecture, both in how we control access. The resources that are capable and
the things that you can do. You [INAUDIBLE] back in
infrastructure are much more limited than you could do in
a traditional architecture. But the DLP uniquely is not really
gonna have a specific effect. Any other questions
about DLP technology? I mean, I think the other thing
that’s really meaningful here. And the reason why I state this is a
big intellectual uplift in addition to a mindset change
from location to data. It also requires you to
put a lot of time and effort in developing a good
data classifications standard. You have to create something that
is agile, and scalable, and usable. You don’t wanna create just three
layers because you’re gonna over protect data. But you also don’t wanna create 176
different categories because then your users will never be able
to figure out how to use it. And they will start
misapplying data. And probably one of the worst things you could have happen is you start radically misclassifying data. That's almost worse than not classifying it at all. I kind of make it akin to
the early days of database design. Databases are incredibly powerful. But if you don’t design
your schema well, that database immediately
becomes a problem. You start overloading terms, you
start missing things and it breaks. Your data classification
hierarchy in a DLP world is gonna be the same thing. And then, when you're thinking about writing applications, like that session we did earlier, you need to start thinking about: how am I going to make my application use the DLP engines, to effect this access-based security, so that my application can take advantage of it? So when Office wrote all of its
things around how email functions and SharePoint functions,
they had to take that into account. You'll have to do the same
thing from your perspective. Yep.>>[INAUDIBLE] your last statement. As far as DLP is concerned, if you have a DLP engine somewhere on your territory, on your premises, how can you utilize or leverage the DLP on Azure? As far as the flow of the traffic [INAUDIBLE] as well. You wanna get the TII [INAUDIBLE].>>Right, so the question is,
if I already have my DLP engine, [LAUGH] what do I do to
deal with this now, right? Because what we’re really describing
is we have our own DLP engines that are running. We tied our DLP engines
to our Identity Stack, because identity is ubiquitous. It’s the thing that
touches everything and so that’s why we tied it there. It’s gonna become an interesting
question of the ability to tie in. And it’s gonna be a one off scenario
that we have to do some deeper review. As to whether or not whatever product you have
on-prem and you’re thinking about. Can then tie back in or
can we talk between the two or not. And the answer in some
cases is gonna be yes, in some cases it’s gonna be no. It depends on how those
things are architected, how open they are with
their architecture. In general we try to do open
architecture everywhere we can and so, for example,
our identity stack runs XAML 2.0. Anything that can speak to that
can speak to our identity stack. You can use things
ubiquitously as a result. OMS, same sort of an idea, we’re trying to use standard
open as much as possible. Our scripting language is standard
open as much as possible. But DLP technologies, I can’t
guarantee that your third party’s following that same philosophy.>>[INAUDIBLE]
>>Yeah, exactly. [LAUGH]
>>[INAUDIBLE]>>Yeah, yeah, well, so SharePoint 2002,
that’s more than ten years ago. That was old Microsoft. [LAUGH] That’s a different company. And one more, I saw another
question in the back maybe. Or I’m just getting blinded by
the light, revved up like a deuce, another roller in the night. Secrets management, so this is one of the last things I wanna talk about. This is a big deal for maintenance
of security in the cloud. In fact, I mean these are literally
the keys to the kingdom. How do you maintain them? How do you deal with them? We designed an internal service
that we call Secret Store, because there is no way
we can manage our fleet by having admins logging into
a traditional password book and checking credentials in and out. That's just not going
to work at our scale. It's not possible. So instead we designed some software
that was going to create a vault that would do all of that for us, and not show us what it was doing.
That was the magic of Secret Store, and that became Key Vault. So just like we did with Just in Time,
we started with us and then made it available to you. We're doing the same thing here:
we have Secret Store internally, and it's Key Vault,
available for you. The vault is software. That software stores your keys. It's backed by HSMs, so that when you create
keying material, there is a physical HSM
that sits behind it. It can do that, but
you can store any secret in there. So you can load your own keys
in there if you wanted to. You can load
service account credentials in there, things that we treat
more like secrets. You could do symmetric encryption
keys, anything that you wanna do, any sort of secret you wanna store, you can load in there. The value is that humans
don’t have to touch it, and your code doesn’t have to touch it. Instead, what you do, is you make an API call, or your
app makes an API call to the vault. Say it initially wants
to create some PKI. Makes an API call to the vault and
says, hey, I need some PKI, and
the vault says, no problem. Creates it using the HSM, gives me an API back that I can
then use to call that crypto. But it doesn’t show me the key. I don’t get to know what the key is. I don’t get to know
what that material is. Then, I can set up my
app to then say okay, I’m gonna use that
key that I created. I’m gonna open a session, I’m gonna transfer some
data over to the vault. The vault’s gonna encrypt it, and then we’re gonna write it at
secure at rest over here. So I don’t ever get to see it,
right? So, in that scenario,
my application can be compromised, and you could take
over my application, and they never see the keying material. Which means they're not gonna be
able to decrypt this encrypted-at-rest data,
cuz they don't have access to it. The only thing they're gonna be able
to do is try to pull it all back live through the application. Right, so they're gonna try to do that. We have solutions coming, they're probably five years out,
that will fix even that. If anybody's familiar with
Secure Enclaves and some of Intel's papers
along those lines, we can have chats afterward about
how that's fun and exciting. But not for today, because
it's a bit down the road. But in general, what we're seeing
is they don’t have access to it. Same idea for service accounts. Right now, if you compromise my
app and my app knows what that service account is because
I’ve put in a code or it actually logs in as it,
in some way, shape, or form. Then the adversary can try
to log in as that service. Now, good hygiene practices to make
all service accounts non-human interactable, but we forget from
time to time, admins make mistakes. We flag it on for a minute and
forget to turn it back off. These are all common
problems that happen. And so
an adversary might try to use that. They’ve compromised my app. They have limited privileges. They use this service account to now
gain a lot more privileges than they previously had. And then replicate that out
until they have everything that they need, right? So if they can’t see
the credentials, they can only request through the
vault that a session be created for them, they can’t actually now
try to break out of context. It’s much more difficult for them
to take over and take advantage. The second-to-last
thing I wanted to talk about is our big threat analytics
and where we're moving
from that perspective. We already kinda got there with the talk about our threat
typing and things. You can get this, coming soon
in the Government environment's Azure Security Center, and
get the threat analytics. We're actually moving to
integrate this all into OMS. OMS is fully available in
the Government environment, where you’re gonna get
our threat analytics, the stuff that we’re doing
against our fabric layer, where we’re watching for
attacks against our infrastructure. That's available for you for
your own subscriptions, and then available to the things
that aren’t even in Azure. So you can get all that threat
telemetry, you can start seeing, hey, is this Helium
banging on my front door? Get all that same data, part of
the things that Microsoft is doing. The last thing I wanted to get to, cuz I did mention
philosophy a little bit. And some of this is
sort of trending. We have an overall philosophy
that we call Achieving Parity, or that I call Achieving Parity,
and enforcing it inside. The general theory there is that we
should treat all cybersecurity like a distributed denial
of service attack. If you think about early days of
denial of service, we would have this problem where you could make
a ping, send me a very small resource request, and I would do a
bunch of things as a result, right? And so we ended up having
to shift our code around. And at first,
that was not that big a deal, because I had more infrastructure
than my adversaries. They needed to get a bunch of
infrastructure before they could try to ping me to death. And so I just made my code
a little bit better so that there wasn't as much disparity. Well, then botnets happened and
all of a sudden, whoops! Our infrastructures are the same or
their infrastructure is bigger. And so now I had to get
a lot better about my code. And so we’re at a point now where
any resource
I have to spend on processing, you have to spend to request
that processing. It's effectively good
denial of service design. We wanna do the exact same thing for
everywhere else in cyber security. Cuz right now we have
this asymmetric problem. Our adversaries can make
small investments and I spend $1 billion a year,
that’s a problem. It is not sustainable especially
when the Russian diaspora and nation states have hundreds of
millions of dollars to spend. We are not in
a sustainable war there. It's why for the last probably 10 to 15 years we've
been in a tactical catch-up game. We're playing chess,
we have to win every single time. One of the big ways we’re
making an investment here is around that idea of changing
the identity stack and changing how you get access, and
removing humans from the equation. Because now in our world
through workflows, if we end up with
a catalogue of, say, 10,000 workflows, I haven't solved
the problem completely, right? Cuz I still have to be right
10,000 times: the code of those workflows has to be good, the interaction among
them has to be good. But I've shifted from an infinite
space to a contained space, so I have to be right 10,000 times,
and they only have 10,000 places to try to be right. I've limited the capability. We're now approaching
parity of investment. We still both have to pay attention,
but I’ve limited the scope. I’ve said hey, actually you
guys like fighting over here. This is where you’re doing this,
where you want. No, you have to come to me. You can still attack me,
but you have to come to me, you have to do it on my terms. We wanna become strategic,
not tactical. We’re using the power of the cloud,
the power of our problem space, managing infrastructure at that scale,
and the engineering might of the largest
software company on the planet to do some pretty
significant investments. We did that on monitoring,
we’re doing it around access. We’re looking at every place
where were seeing common vectors that allow our adversaries to
make small investments and we currently spend big investments. And we’re deciding,
the reason I joined Microsoft, I switched away from being
a consultant a year and a half ago was to use that power
to make a dramatic change. And we think this is
really the approach. This is the difference. We want to think about all of cyber
security like it’s a distributed denial of service. We want to do the same
push that we did for that technology in
everything that we do. So that we can get strategically
ahead of the curve for the first time. And then do our best to stay
there and on top of that be radically transparent about
how exactly we’re doing it, so that we have for
the first time in about, five to ten years, a resource
advantage once again, right? So we make it equal resources and
then I once again, have more resources than they do. It becomes a lot harder for
us to bcome compromised. Any, I want to save
the rest of the time,
I think we have ten minutes left officially, for questions. And then I'll be
around the rest of the day and tomorrow for
anybody who wants to follow up. Go ahead.>>You've mentioned several times
removing humans from the equation [INAUDIBLE] just wondering if you’re
against what I do for a living or?>>[LAUGH]
>>[LAUGH]>>No, absolutely not. And we still have admins as well, right? So the admins have to be there. The difference is I don’t want the admins in and out.
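That just-in-time model, where admins hold no standing credentials and instead request a scoped, time-boxed grant that expires on its own, can be sketched roughly like this. It is a toy illustration of the pattern; the broker, names, and policy here are invented, not Microsoft's actual implementation.

```python
import time
from dataclasses import dataclass

@dataclass
class JitGrant:
    """A time-boxed elevation: access exists only inside the window."""
    admin: str
    scope: str          # one workflow or resource, not the whole fleet
    expires_at: float

    def active(self, now=None) -> bool:
        return (now if now is not None else time.time()) < self.expires_at

class JitAccessBroker:
    """Toy broker: approve a scoped request, auto-expire it, no standing access."""
    def __init__(self):
        self._grants = []

    def request_elevation(self, admin: str, scope: str, ttl_seconds: float) -> JitGrant:
        # In a real system this is where the approval workflow and
        # audit logging would happen before anything is granted.
        grant = JitGrant(admin=admin, scope=scope,
                         expires_at=time.time() + ttl_seconds)
        self._grants.append(grant)
        return grant

    def is_allowed(self, admin: str, scope: str) -> bool:
        # Deny by default; allow only via an unexpired, matching grant.
        return any(g.admin == admin and g.scope == scope and g.active()
                   for g in self._grants)

broker = JitAccessBroker()
broker.request_elevation("alice", "restart-frontend-vm", ttl_seconds=900)
print(broker.is_allowed("alice", "restart-frontend-vm"))  # True inside the window
print(broker.is_allowed("alice", "rotate-keys"))          # False: different scope
```

The point of the design is the default: `is_allowed` answers no unless an unexpired, exactly-scoped grant exists, so "we flipped it on and forgot to turn it back off" stops being a failure mode.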
I did ops IT for nine years. I feel your pain, let me put it->>I have an answer for you, both of you.
>>I have an answer as well, cuz I hear this a lot from CIOs
who are concerned about it. They're either excited, they're like, moving to the cloud means I can
fire my whole IT team, right? And we're like, no. Or they're worried that moving to the cloud means
I'm gonna lose my whole IT team. No, the difference is you go
from keeping the lights on to providing mission-based value. Your job gets better and more
interesting cuz you move from ops to DevOps, and DevOps is more fun. I promise you,
it’s more interesting. That you get to do a better job. And honestly it’s one of the these
things where we’re wasting a lot of potential intellectual capacity. My grandparents are farmers. I grew up in Kansas during farming
before the green revolution two-thirds of the population in
the United States was farming. That was a lot of wasted
intellectual capacity. We’re doing the same thing here. The cloud is the capability to allow
everyone to focus more back on their core mission,
including your internal IT staff. They can now think about how
can I make the business better, not just how do I make sure that
people aren't screaming because they can't check their email? Those are different worlds, those
are different value propositions. If anything,
it makes IT more valuable to an internal organization, because
you are now tied as a productive part of the mission,
not a necessary expense. I have to tell you,
I hated being the necessary expense. I felt like I was
the technical janitor, right? Yeah, you don’t want
to be in that world. You want to be in the world that’s
producing something, right? That’s how you get there
through the cloud, by taking your eye off the ball that
other companies do well for you, and focusing your stuff back
on the core mission of your organization that you’re
gonna do better than we will. Which is why we’re not
doing some other things, like developing enterprise
healthcare solution. Or, apparently we’re not
doing desktop in the cloud.>>[LAUGH]
>>Next, one more in the back, under the light.>>Have you not seen any
of the Terminator movies?>>[LAUGH] Right, the question
is, have I not seen any of the Terminator movies?
>>There is still a human in our equation, right? We're not talking about using
cognitive services or AI to do this. This is not self-healing. We're really using
software as a middle layer to change the threat landscape. We're changing the threat surface.>>[INAUDIBLE]
>>Right, so are we gonna see Skynet reborn? We have no intention of designing
Skynet. I mean, I'm not in Microsoft Research, I don't know
what they're really thinking. But with all of the AI
tools anywhere on the planet, we're a long way away from
anything like that.>>[INAUDIBLE]
>>[LAUGH] Still the biggest threat? Maybe, maybe. All right, well thank you. You have been a wonderful audience.>>[APPLAUSE]
>>I appreciate it.
