Tuesday 6 July 2021

Configuring PIM access policies for Azure resources

In my previous post I described how you can onboard an Azure subscription or management group to Azure PIM (Privileged Identity Management) so that you can start creating conditional access policies for Azure resources. In this post I am going to go through an example of how to control access to a particular resource.

Pre-requisites

You will need to have onboarded the subscription or management group that contains the resources you wish to configure access for, and you will need the "Owner" or "User Access Administrator" role on the resource(s) that you wish to configure.
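If you want to check the role prerequisite from PowerShell first, here is a quick sketch using the Az module; the resource group name and sign-in name are placeholders for your own environment:

# Check that a user holds Owner or User Access Administrator at the scope
# we are about to configure. "rg-ppe" and the sign-in name are placeholders.
Connect-AzAccount

$scope = (Get-AzResourceGroup -Name 'rg-ppe').ResourceId

Get-AzRoleAssignment -Scope $scope -SignInName 'admin@contoso.com' |
    Where-Object { $_.RoleDefinitionName -in 'Owner', 'User Access Administrator' } |
    Select-Object RoleDefinitionName, Scope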

  • Under “Azure AD Privileged Identity Management” blade click “Azure Resources”
  • By default, you will only see subscriptions. If you wish to assign permissions at other levels then you will need to use the "Resource Type" filter. Please note: ensure you have sufficient privileges to see the resources (you need the "Owner" or "User Access Administrator" role)

  • Once you have selected the "Resource Type" filter you have the option to choose which resource types to show

  • As we set the filter to only show resource, resource group and subscription, you can now see all the resources listed based on the permissions you have. We will select the "rg-ppe" resource group to configure

  • The overview page shows some general statistics about any PIM activities, such as who might have activated a role at the "rg-ppe" resource group. To start configuring PIM for this resource group, select "Assignments"
  • The default view will show any "eligible" assignments that have already been configured at this level or inherited. As we are going to create a new assignment, we will select "Add assignments"

  • On the "Add assignments" screen, check under "Membership" to confirm you have selected the right resource and resource type. Then, under "Select role", decide which built-in Azure or custom role you would like to assign for this resource group

  • I have selected the "Reader" role. Next, under "Select member(s)", click "No member selected" and add the users or groups that you would like to be eligible for this role

  • Check you have selected the right role and member(s) for this role, then click "Next" to continue

  • Select whether you would like the assignment to be "Eligible" or "Active". Eligible means members of this role have to perform one or more actions before they can use the role, for example completing MFA or providing a ticket number. Active means members of this role do not need to perform any actions and are always assigned this role. We will select "Eligible", and with this assignment type you have the option to adjust the start and end date/time for the role. For example, a new contractor has been hired for six months (Jan-June) and is expected to do some work on the resources under our "rg-ppe" resource group between Mar-Apr. We could set the start and end dates to Jan-June, or be more specific and have it run Mar-Apr; by restricting it further, the contractor will only see the role as available during the assignment dates. We will leave it at the default of one year and click "Assign"


  • Under "Eligible assignments" you should see the "Reader" role and the users and groups we have assigned to it. Right now the role will use the default access controls for activation. To make changes to the access policy, click "Settings"


  • On the settings screen, search for the role that you would like to change. If the "Modified" column shows "Yes", it means the default settings have already been modified. We are going to select "Reader" as this is the one we are working on for this example
  • First check at the top that we are modifying the right resource and role. This page shows the current settings; to change them, click "Edit"

  • Under the "Activation" tab we will change the activation maximum duration, which defaults to 8 hours; I will change it to "1". Select "Update", or if there are other settings you want to change first, work through the other two tabs (Assignment and Notification)

We have now finished configuring a privileged access policy for our resource group "rg-ppe", so we now need to log in with a user that was assigned this particular policy.
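As a side note, if you would rather script the eligible assignment than click through the portal, newer versions of the Az.Resources module include PIM cmdlets. Below is a rough sketch only; the cmdlet and parameter names are worth verifying against your module version, and the principal object ID is a placeholder:

# Hedged sketch - requires a recent Az.Resources module with the PIM cmdlets.
# Creates an eligible "Reader" assignment on the rg-ppe resource group for one year.
$scope   = (Get-AzResourceGroup -Name 'rg-ppe').ResourceId
$roleDef = Get-AzRoleDefinition -Name 'Reader'

New-AzRoleEligibilityScheduleRequest -Name (New-Guid).Guid `
    -Scope $scope `
    -PrincipalId '<object-id-of-user-or-group>' `
    -RoleDefinitionId "$scope/providers/Microsoft.Authorization/roleDefinitions/$($roleDef.Id)" `
    -RequestType 'AdminAssign' `
    -ScheduleInfoStartDateTime (Get-Date).ToUniversalTime() `
    -ExpirationType 'AfterDuration' `
    -ExpirationDuration 'P365D'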

  • I will log in as the user "Yuna", navigate to the Privileged Identity Management page and select "My roles"

  • By default it will show Azure AD roles, so click on "Azure Resources"
  • Under "Eligible" you will see which roles you have been assigned, which resource each role has been set at and the end time. Click "Activate" to start the process

  • You will now see that the maximum duration you can select is "1" hour, and you have to state a reason why you want to activate the role. Once some text has been entered, click "Activate"

  • Wait for the role to be activated, which you can see is a three-stage process. Once completed the browser should refresh, but you may be prompted to enter your credentials again

  • The screen will refresh back to the "Eligible assignments" tab. Click on "Active assignments" to see that your role is active; you can see the end time and an option to "Deactivate" the role before the end time
You have now configured privileged access for a specific Azure resource, and there are many more options that you can configure, for example requiring MFA or requiring additional users to approve the role before it can be used. You can also configure notification settings so that you get notified when someone activates a role. This is a great additional feature to use if you have an Azure AD Premium P2 license to further enhance your Azure resource access.

Wednesday 30 June 2021

On board Azure Resources to use Azure AD Privileged Identity Management (PIM)

If you have Azure AD Premium P2 licences, one of the reasons would have been to use Privileged Identity Management (PIM), as it's a great tool to help provide "just-in-time" privileged access to resources that you don't need permanent access to.

In this article I will be going through how to onboard Azure resources into PIM so that you can control privileged access for your Azure resources as well. This means you can create conditional access policies for certain resources, resource groups, subscriptions or even management groups to ensure users only have the required permissions at the right time. 

An example would be: by default you assign the Reader role to IT operations staff so that they can see all the resources. If they decide they need to make a change, they would need to use PIM to activate a particular role you have assigned them which gives them permission to make the change. As part of activating the role you might want to add some conditions, for example requiring users to use multi-factor authentication, include a ticket number or gain approval, and limiting the maximum amount of time the role can be activated for.

Below are the steps to get started on the journey...

There are some pre-requisites to start with

  • Azure AD Premium P2 license
  • You will need “Owner” or “User Access Administrator” role on the Azure resources that you wish to on-board to PIM.
Please note, once you have onboarded a management group or subscription to be managed, you cannot unmanage it. This is to prevent another resource administrator from removing Privileged Identity Management settings. The only way to unmanage it is to delete the management group or subscription.
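If you want to quickly confirm the licence prerequisite, here is a small sketch using the AzureAD PowerShell module (the P2 SKU part number shown in the filter is the usual one, but it is worth confirming what your tenant reports):

# Hedged sketch - list the tenant SKUs so you can confirm Azure AD Premium P2 is present.
Connect-AzureAD
Get-AzureADSubscribedSku |
    Select-Object SkuPartNumber, ConsumedUnits |
    Where-Object { $_.SkuPartNumber -like '*AAD_PREMIUM*' }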

  • Log in to https://portal.azure.com and use the search bar to find "Azure AD Privileged Identity Management"
  • Under "Privileged Identity Management" blade click "Azure Resources"

  • On this screen, if you or someone else has already onboarded some Azure resources you will be able to see them here. Please remember: you may not see some resources if you don't have the correct permissions for those Azure resources. Click on the "Resource type" or "Directory" filter for more options to see what resources have been onboarded. You may need to click "Refresh" to ensure the content is up to date, as there have been a few times where the screen doesn't seem to refresh automatically

  • If the resource has not been onboarded yet, then click on “Discover resources”

  • By default, this screen shows the resource state "Unmanaged" and the resource type "Subscription". As you can onboard subscriptions or management groups, you will need to change the filter so that you can see all the management groups or subscriptions that have not been onboarded. Again, remember to select "Refresh" so that the resource screen refreshes. The screen below has been set to show "All" resource types

  • We are going to select the resource we would like to onboard. For this demo we are selecting our "Free Trial" subscription. Once you have selected the resource(s) to be managed by PIM, you will be able to click "Manage resource"

  • A warning message will appear highlighting that all child objects of the resource will be managed by PIM. For example, for a management group the possible child objects would be management groups, subscriptions, resource groups and resources. For a subscription the possible child objects would be resource groups and resources. Click "OK" to continue

  • On the top right of the screen, if you click on the "bell" icon you should see the task for the resource being onboarded

  • Once the task has completed you will see that the "unmanaged" resource is no longer listed on the screen. You will need to click on "Privileged Identity Management" to go back to the Azure resources screen

  • On the screen below you can see that the "Free Trial" subscription has been successfully onboarded to PIM, ready for you to start configuring roles to be controlled by PIM

We have now onboarded our "Free Trial" subscription to Privileged Identity Management, which means we can start configuring just-in-time privileged access to Azure resources. In my next article I will describe how to configure access to specific resources.


Thursday 6 May 2021

Azure Resource Naming Convention

One of the key governance elements when deploying resources to the cloud is to have a good naming convention. A good naming convention should help people quickly identify what a resource is and any relevant information that could prove useful, for example its location (uks, ne) or environment (prod, preprod, etc). You might see a VM resource reporting unhealthy and, just by the name, be able to identify which location it is running from and which environment, without needing to query for more information. People can then make a quick judgement call on whether they need to prioritise fixing this particular VM or whether it can be dealt with later.

So how do you start, and is there a good framework to work against? In the Azure world, Microsoft has created tons of material called the "Cloud Adoption Framework" (CAF), where you will find best practices, guidance and tools from Microsoft to get you going. https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/

Below are a few articles that I have drawn out from CAF which I have used often to give me ideas for naming conventions.

A quick article to give you an idea of how you might want to define the overall naming convention for your resources in Azure, with some examples.

https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/resource-naming
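To make this concrete, here is a small hypothetical sketch of assembling a name from convention components; the components and abbreviations are illustrative rather than taken from the article:

# Hypothetical example of building a resource name from convention components:
# <resource-type>-<workload>-<environment>-<region>-<instance>
$resourceType = 'vm'        # abbreviation for the resource type
$workload     = 'payroll'   # application or workload name
$environment  = 'prod'      # prod, preprod, dev, etc.
$region       = 'uks'       # UK South
$instance     = '01'

$vmName = "$resourceType-$workload-$environment-$region-$instance"
# Result: vm-payroll-prod-uks-01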

The following article gives you some ideas on the recommended abbreviations for various Azure resource types if you can't think of any

https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/resource-abbreviations

This final article is important as it highlights the naming rules and restrictions for Azure resources. The important column to look out for is "Scope". There are three possible scopes at present:

  • Global - This means the resource name has to be unique across the Azure platform as it is exposed as a possible public endpoint. An example would be a storage account name. You may never expose your storage account publicly, but because you have the option to expose it, the name has to be unique at creation time.
  • Resource Group - This means the resource name has to be unique within the resource group for that resource type. For example, you cannot have two managed disks with the same name, but you could have a network interface and a managed disk with the same name as they are two different resource types.

  • Resource Attribute - This means the name has to be unique within the parent resource. An example would be file shares within a storage account: you cannot have two file shares with the same name under the same storage account.

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules

As always, this documentation is just best practice and you should adapt it to suit the needs of your organisation. One key takeaway is to make sure the naming convention is documented and kept up to date where possible. As you start to automate processes or builds, you will find that with a good naming convention you are able to make your scripts more generic.

For example, say you wanted to know how many disks are OS disks. If you had followed the recommended best practices, then any disk that you have deployed and attached as an OS disk for a VM would have the words "OS Disk" in its name. Of course, you could still find out whether a disk is an OS disk by querying each VM and getting the OS disk attribute, but what if you just had a bunch of disks not attached to a VM, or had just done a restore of disks? It would be very hard to identify them, and your script would be time consuming to write and run.
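A rough sketch of the difference, assuming a naming convention where OS disks contain "osdisk" somewhere in their name (the filter pattern is an assumption, adjust it to your own convention):

# With a naming convention, a simple name filter is enough - it also catches
# detached or restored disks that are not referenced by any VM.
Get-AzDisk | Where-Object { $_.Name -like '*osdisk*' } | Select-Object Name, DiskSizeGB

# Without a convention, you have to walk every VM and read its OS disk attribute,
# which misses any disk that is not currently attached to a VM.
Get-AzVM | ForEach-Object { $_.StorageProfile.OsDisk.Name }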


Sunday 28 February 2021

Align your managed disk to Microsoft standard tiering to optimise your spending

“Managed” disks have been around in Azure for a few years now and are the standard for deploying with VMs. I started deploying VMs when “unmanaged” disks were the standard and you had to plan carefully how many disks and how much IOPS you needed, and strategically place those disks in the right storage accounts to ensure they could deliver that performance. We were using spreadsheets to help us track which disk was in which storage account and to make sure we weren’t hitting the maximum IOPS limits that the storage account could deliver. Over time it just got very, very messy and complex to manage.

With the introduction of “managed” disks all those pains were taken away and Azure dealt with them in the backend. We could change from SSD to HDD and vice versa easily, snapshots were just a few clicks, and we no longer needed to search storage accounts for orphaned disks that we were paying for because we forgot to decommission them as part of the VM. With managed disks we just had to work out how much disk space and IOPS we needed, and then select the disk that matched our requirement as closely as possible.

Previously, for standard HDD, Azure would charge you based on the amount of data you had written to the disk and not the actual size you had assigned to it. For example, if you had a 100GiB disk and had only written 40GiB of data, then you would be charged for just 40GiB. If your disk was SSD then you would be charged on the disk size you had assigned, so with the example above you would be charged for 100GiB even though you had written just 40GiB of data. So, if you were using standard HDD this pricing method worked well, as you could provision the disk size to be quite big and not worry about the cost, just like you would on premise on systems like VMware. With SSD you had to be careful, as the cost was size dependent and not based on consumption of space.

Microsoft has been on a standard tiering cost model for a while now across all the types of disk that they offer. Below is a sample of the fixed disk sizes and the tiering name for each type of managed disk.

From the table you can see that the disk sizes are not the usual sizes that we might assign to our VMs (10GiB, 50GiB, 100GiB, etc.), but they do seem to align to the physical disk sizes that you would normally buy for SSDs, especially from 128GiB upwards, which may be a coincidence. With this standard tiering you might assume that when creating a new disk you wouldn’t have the choice of a custom size, but you are allowed to. I am assuming Azure allows this because some systems may have to use specific disk sizes.

Microsoft highlights that “Azure Managed Disks are priced to the closest tier that accommodates the specific disk size and are billed on an hourly basis.” As an example, if your SSD had 12GiB of data written and your disk size was 20GiB, then you would be charged at the 32GiB tier.
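A rough sketch of that rounding logic, using the Premium SSD tier sizes as they stood at the time of writing (it is worth checking the current pricing page for the up-to-date list):

# Hedged sketch: round a provisioned disk size up to the billing tier that accommodates it.
# Tier sizes below are the Premium SSD tiers (P1-P50) - check current documentation.
$premiumTiersGiB = 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096

function Get-BillingTierGiB {
    param([int]$ProvisionedSizeGiB)
    # Smallest tier that is greater than or equal to the provisioned size
    return ($premiumTiersGiB | Where-Object { $_ -ge $ProvisionedSizeGiB } | Select-Object -First 1)
}

Get-BillingTierGiB -ProvisionedSizeGiB 20   # returns 32 - a 20GiB disk is billed at the 32GiB tier
Get-BillingTierGiB -ProvisionedSizeGiB 16   # returns 16 - resizing to 16GiB drops you a tier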

I have included the cost of a 16GiB disk because my example stated that only 12GiB of data had been written, and I wanted to highlight the cost difference if the disk were sized at 16GiB instead. You can see that you would be paying around 54% more per month if you stuck with the 20GiB disk size. Imagine we had 100 of these: what would the cost be and how much would we save per year?

From the table you can see that we could save around £2,210 per year if we moved to 16GiB disks, which is quite a substantial amount.

So why did I bring this topic up?

I started my journey into Azure on a project performing a “lift and shift” phase, where we were just migrating our existing workloads into Azure. As it is the first phase, you want to match the performance of your on-premise VMs in Azure in terms of CPU, memory, disk size and disk IOPS. During this phase, if you couldn’t find the right size match then you would most likely size up. As I stated before, when I first started this journey managed disks had not been introduced, so we sized disks exactly as they were on-premise. We didn’t want to do any optimisation at this stage as we wanted users and key business stakeholders to build up their confidence with cloud.

As most of our workload was migrated from on-premise, we were always “over-provisioning” the disk size, as we knew that our storage systems (SAN/NAS) would do their magic, like thin provisioning, deduplication and compression, to ensure that we got value from the SAN/NAS and could store more data. We never really needed to think about performance, as overall the system would balance out over time with the type of workload that we were running. As we had already purchased the system, the more we could utilise the storage the better the return on investment, and as the cost is shared across all the applications/systems this gets cheaper over time.

We started to work through our disk sizes with service/application owners to see whether the migrated disks could be aligned to the tiers Microsoft was offering, to try and save some money, or in other words optimise our spend and add value for money. You can see from the example above how a disk that has been sized at 20GiB but is using just 12GiB of space could be resized to P3 (16GiB). Obviously, there are some key factors we have to take into consideration before just steamrolling ahead with the disk work:
  • There is no way to resize a disk to be smaller without using a third-party tool, so you would need a longer downtime to perform this action and possibly have to purchase a tool. Another option would be to provision a new disk at the size you want and copy the data over, but again this would require a longer downtime. With OS disks this gets more complicated, so you really have to think about whether it is worth the effort, especially if the server will be decommissioned soon.
  • Although you might want to size down a disk, each disk size offering has a maximum IOPS and throughput it can deliver, so you will need to review the performance metrics of the disk to ensure that sizing down won’t hamper the performance of the server. If you are cross-charging departments, they will need to understand why you sized a disk at 512GiB instead of the 256GiB they requested: only the 512GiB disk could deliver the required IOPS or throughput.
  • Size up, but why? For example, a disk was sized at 100GiB and had already used around 80GiB. Given Microsoft's statement that “Azure Managed Disks are priced to the closest tier that accommodates the specific disk size and are billed on an hourly basis”, we were already paying for the higher tier (128GiB), so why not just make that disk space available for the VM to use, as you are already paying for it (see the resize sketch after this list)
  • Most of the disks provisioned on-premise were sized for growth over a period of 3-5 years. As we resized some of the disks smaller, the show-back cost appeared lower and departments were thinking "great, we are saving money", BUT you need to let them know that the cost will go up as the data grows and the disk size changes. You are only trying to optimise the overall cost of running the server over a period of time. You need to remind them that they can’t cut the budget, as it will be needed eventually if the data does grow!!
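Here is a minimal sketch of the "size up for free" case from the third point above, assuming the disk can be expanded safely (the resource group and disk names are placeholders, and you should deallocate the VM or follow the online-expansion guidance before resizing):

# Hedged sketch: expand a 100GiB data disk to 128GiB, since the 128GiB tier
# is already what is being billed. Names below are placeholders.
$disk = Get-AzDisk -ResourceGroupName 'rg-ppe' -DiskName 'disk-data-01'
$disk.DiskSizeGB = 128
Update-AzDisk -ResourceGroupName 'rg-ppe' -DiskName 'disk-data-01' -Disk $disk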
By aligning to Microsoft's size tiering, we were able to start standardising our disk offerings and to give costings more easily and accurately. Service/application owners started to think more carefully about disk size and IOPS/throughput requirements, as these now affected the overall cost of their server over x number of years. Any disk size increase was always double-checked to ensure that anything that could be deleted was deleted first before increasing.

One thing to be clear about is that you are looking to optimise your spend; there is no guarantee that you will save money. What you’re hoping to get out of this exercise is that you are not overspending when you don’t need the capacity or performance of the disk from day one. A simple example would be a system that expects year-on-year data growth over the next five years and by year 5 will need 256GiB. On a traditional SAN/NAS system you may have “thin provisioned” the disk, setting a maximum that you will allow it to grow to, and when you come to upgrade/refresh the SAN/NAS you know that you need to buy capacity to cater for that growth. But in the cloud, if we were to provision that space upfront then we would end up paying a lot for capacity that we have no use for.

As you can see from the table, if I were to go up a disk tier each year rather than provisioning the final size at day 0, I would save around £1,019.52 over 5 years.
Go ahead and have a look at your disk sizes today and see if you can optimise your spending in Azure. With Microsoft now starting to offer disk reservations for 1TB+ disks, I don’t think it will be long before smaller disk sizes have disk reservations too.

Using Windows Terminal and customising it

As a system administrator, you will most likely have multiple command line terminal tools that you use to help manage your systems. By the end of the day, you will end up with a screen full of terminals like the one below.
You can see that there are multiple windows for PowerShell, PuTTY and cmd. It gets messy, and you can lose track over time of which window is for which, especially if you work in multiple environments, i.e. production, pre-production, etc.

Here comes……. Windows Terminal from Microsoft, which is an open-source project that you can contribute to if you fancy. It’s an application where you are able to open multiple shells and keep them as tabs instead of multiple windows. You can think of it like your web browser, where you have multiple tabs and each tab is at a different location/page. You can download the application from the Microsoft Store and the system requirements are simple - Windows 10 version 18362.0 or higher.

Once installed, upon your first launch you will see that it defaults to PowerShell. You can see a “+” icon, and selecting it will open another PowerShell window as a second tab.

If you click the down arrow you will see that you can pick from the three default shell terminals that you can open. As you select each one, it will open a new tab for you.

Here I am going to show you how I customised it to make it work more for myself, using version 1.5.10271.0 of Windows Terminal.

Adding another command shell

In this first example I am going to add another option to the list of terminals, which is to open into bash for me to issue git commands (this assumes you have already installed Git SCM). On the menu/tab bar, click on the down arrow and go to "Settings".

Notepad should open with a file named “settings.json”. Locate the “list” section.

You should see the three default tab options for Windows Terminal, which are PowerShell, Azure Cloud Shell and Command Prompt. We are now going to add our bash one, and the minimum format required is like the example below.
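Something along these lines (the GUID is just a placeholder; generate your own, for example with New-Guid):

{
    "guid": "{00000000-0000-0000-0000-000000000000}",
    "name": "Name shown in the menu",
    "commandline": "executable to launch"
}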
We are going to add the block to the “list” section, and as it is a JSON file we need to make sure we add a “,” after the previous block. If you already have the path to the executable in your Windows system environment variables, then for the command line you can just type the executable name. If you don’t, then you will need to point it to the full path of your executable. Any “\” in the path needs to be escaped as “\\”. You can see this in the example below.
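For example, assuming Git for Windows is installed in its default location (adjust the path, and use your own GUID):

{
    "guid": "{a3da8d92-2f3f-4e36-9714-98876b6cb480}",
    "name": "Bash Demo",
    "commandline": "C:\\Program Files\\Git\\bin\\bash.exe"
}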

Save and close the settings.json file. If there are any syntax errors you will see a screen similar to the one below, which will highlight whereabouts your error is. If you click “OK” it could still work depending on where your error was, but every time you launch Windows Terminal you will receive the error message until you fix it.
If you have got the syntax correct, you should now have another terminal to select from, which I have labelled “Bash Demo”.
When you execute it you should see it load into a bash shell.


Changing Starting Directory

As you start each of the shell windows, you will notice that they all start in various places based on how you started Windows Terminal. We might want to change this because, for example, you might want PowerShell to start in the directory where your scripts are held. To make the change, add “startingDirectory” within one of the list/terminal blocks. For example, below I have added the starting directory for my PowerShell terminal to be c:\scripts.
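A sketch of what that block might look like; the GUID and commandline should match the PowerShell profile already present in your own settings.json:

{
    "guid": "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",
    "name": "Windows PowerShell",
    "commandline": "powershell.exe",
    "startingDirectory": "C:\\scripts"
}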
Remember that for directory paths you will need to use “\\” to escape them correctly. Once you run the PowerShell terminal, you will see that the working/starting directory has changed to the one specified.

Passing commands to the shell on start-up as well

Another tip: you may not only want the command shell but also want to pass it some commands on start-up. For example, I want to be able to connect to my Azure subscription or one of my VMware vCenter servers. Obviously, you don’t want to include your username or password on the system, but you can at least pass in your Azure subscription ID or vCenter name so that you have fewer commands to type. On the menu/tab bar, click on the down arrow and go to "Settings", which should open settings.json in Notepad. Locate the "list" section.
We will add a new block after the “Bash Demo” one. For PowerShell to execute a command and not close the window immediately we need to pass “-NoExit”. We know that to connect to an Azure subscription we use the Azure PowerShell cmdlet “Login-AzAccount”, and to pass the subscription ID we use “-SubscriptionId”. So the full command would be “powershell.exe -NoExit -Command Login-AzAccount -SubscriptionId %YourSubscriptionID%”.
For my VMware one it would be similar to the command below.
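Something like this, assuming the VMware PowerCLI module is installed (the vCenter name is a placeholder):

powershell.exe -NoExit -Command Connect-VIServer -Server vcenter01.mydomain.local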
So overall, my "list" section in the settings.json file is as per below:
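A hedged reconstruction of roughly what the "list" section could look like with the extra profiles added; the GUIDs, paths and server names are placeholders, and the three default profiles should keep whatever GUIDs your install generated:

"list":
[
    {
        "guid": "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",
        "name": "Windows PowerShell",
        "commandline": "powershell.exe",
        "startingDirectory": "C:\\scripts"
    },
    {
        "guid": "{0caa0dad-35be-5f56-a8ff-afceeeaa6101}",
        "name": "Command Prompt",
        "commandline": "cmd.exe"
    },
    {
        "guid": "{b453ae62-4e3d-5e58-b989-0a998ec441b8}",
        "name": "Azure Cloud Shell",
        "source": "Windows.Terminal.Azure"
    },
    {
        "guid": "{a3da8d92-2f3f-4e36-9714-98876b6cb480}",
        "name": "Bash Demo",
        "commandline": "C:\\Program Files\\Git\\bin\\bash.exe"
    },
    {
        "guid": "{1f7a62ac-8f6b-4d2c-9b6e-2f3c1a4d5e6f}",
        "name": "Azure Subscription",
        "commandline": "powershell.exe -NoExit -Command Login-AzAccount -SubscriptionId %YourSubscriptionID%"
    },
    {
        "guid": "{2e8b73bd-9c7c-4e3d-8c7f-3a4b2c5d6e7f}",
        "name": "vCenter",
        "commandline": "powershell.exe -NoExit -Command Connect-VIServer -Server vcenter01.mydomain.local"
    }
]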
As you can see, you can do quite a bit of customisation and make it work for you. The following Microsoft page has more information on Windows Terminal and how to customise it further: https://docs.microsoft.com/en-gb/windows/terminal/customize-settings/startup



New Azure KMS IP and domain Addresses for activation

For Windows virtual machines deployed into Azure using marketplace images you may have created rules in your NSG or firewalls to allow the s...