Services overview – Corexpert & TeamWork Blog
https://blog.corexpert.net – Resources for AWS cloud architects and developers

What are CloudFormation custom resources and how to use them
https://blog.corexpert.net/en/2020/11/18/what-are-cloudformations-custom-resources-and-how-to-use-them-2/
Wed, 18 Nov 2020 11:02:19 +0000

What is AWS CloudFormation?

CloudFormation is an essential tool for anyone working on the AWS cloud platform. It helps you provision any AWS resource quickly and reliably, and it is the best-known example of what we call IaC: Infrastructure as Code.

CloudFormation makes deployment, maintenance, and updates easier for entire environments, and it also turns your infrastructure into templates, making it easy to reuse.

AWS services are continuously updated with plenty of nice new features, and each service has its own lifecycle. In that sense, the AWS cloud platform can be compared to a huge microservices infrastructure.

Nonetheless, we might wonder whether CloudFormation is always up to date with all the other services.

It is not. Sometimes a feature or a specific configuration is missing from CloudFormation's native support, and you cannot go any further that way.

Let's see how to use CloudFormation to create any AWS resource with any configuration we want!

Custom resources to the rescue

Luckily, AWS CloudFormation custom resources are here to help.

Custom resources let you provision AWS resources programmatically whenever a CloudFormation stack is created, updated, or deleted.

So, how does it work?

Put simply, on a change, CloudFormation invokes a Lambda function with a specific trigger event, then waits for a callback from that function to determine whether the resource was successfully modified or not.

Note that, thanks to Lambda, we can use our favorite SDK (Python, Java, Node.js, …) to provision the AWS resources.

As explained above, the CloudFormation custom resource invokes a Lambda function and uses the event input to pass essential information and parameters to it. Below is a sample of what the input looks like:

{
   "RequestType" : "Create",
   "ResponseURL" : "http://pre-signed-S3-url-for-response",
   "StackId" : "arn:aws:cloudformation:us-west-2:123456789012:stack/stack-name/guid",
   "RequestId" : "unique id for this create request",
   "ResourceType" : "Custom::TestResource",
   "LogicalResourceId" : "MyTestResource",
   "ResourceProperties" : {
      "Name" : "Value",
      "List" : [ "1", "2", "3" ]
   }
}

NB: The ResourceProperties field is where you pass custom parameters to your function.
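To make the flow concrete, here is a minimal sketch of a handler dispatching on RequestType. The action bodies and the returned messages are purely illustrative, and the final callback to CloudFormation (shown later in this article) is left out:

```python
def handler(event, context):
    """Route the CloudFormation lifecycle event to the right action."""
    props = event['ResourceProperties']
    request_type = event['RequestType']

    if request_type == 'Create':
        result = {'Message': 'created %s' % props['Name']}
    elif request_type == 'Update':
        result = {'Message': 'updated %s' % props['Name']}
    elif request_type == 'Delete':
        result = {'Message': 'deleted %s' % props['Name']}
    else:
        raise ValueError('Unknown RequestType: %s' % request_type)

    # In a real custom resource, we would now POST/PUT this result
    # back to event['ResponseURL'] so the stack operation can finish.
    return result
```

A real handler must always call back, even on failure, otherwise the stack stays stuck until the operation times out.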

Notice that a response URL is also provided by CloudFormation. It is the endpoint to call back when your function is done, so CloudFormation can learn the status of your actions. Below is a sample of what the response URL expects:

{
   "Status" : "SUCCESS",
   "PhysicalResourceId" : "TestResource1",
   "StackId" : "arn:aws:cloudformation:us-west-2:123456789012:stack/stack-name/guid",
   "RequestId" : "unique id for this create request",
   "LogicalResourceId" : "MyTestResource",
   "Data" : {
      "OutputName1" : "Value1",
      "OutputName2" : "Value2",
   }
}

Custom resources: use case & implementation

As I often do when working with clients, I first build the whole solution “by hand” in the console, which lets me validate the Proof of Concept with them quickly. After this first step, we can start building the IaC with CloudFormation, giving us a template that will be used for every environment, up to production.

While working on an AWS AppStream project, I was surprised to find that it was impossible to attach IAM roles to either the Image Builder or the Fleet using CloudFormation native resources.

Checking the documentation, I quickly saw that it was possible with the AWS SDK, so I chose to use CloudFormation custom resources to build both the Image Builder and the Fleet.

Building the custom resource lambda function

First, I wrote the Lambda code that creates and deletes the needed resources.

We retrieve the parameters from the event:

appstream_image_builder = event['ResourceProperties']['AppstreamImageBuilder']
appstream_fleet = event['ResourceProperties']['AppstreamFleet']

Then we check the request type in the event and act accordingly:

  • Create:

if event['RequestType'] == "Create":
    LOGGER.info('appstream_image_builder: \n %s', appstream_image_builder)
    LOGGER.info('Creating image builder')

    response = client.create_image_builder(
        Name=appstream_image_builder['Name'],
        ImageName=appstream_image_builder['ImageName'],
        InstanceType=appstream_image_builder['InstanceType'],
        Description=appstream_image_builder['Description'],
        DisplayName=appstream_image_builder['DisplayName'],
        VpcConfig={
            'SubnetIds': appstream_image_builder['SubnetIds'],
            'SecurityGroupIds': appstream_image_builder['SecurityGroupIds']
        },
        IamRoleArn=appstream_image_builder['IamRoleArn'],
        DomainJoinInfo={
            'DirectoryName': appstream_image_builder['DirectoryName'],
            'OrganizationalUnitDistinguishedName': appstream_image_builder['OrganizationalUnitDistinguishedName']
        },
        Tags=appstream_image_builder['Tags']
    )

    LOGGER.info('Image builder created')

    LOGGER.info('appstream_fleet: \n %s', appstream_fleet)
    LOGGER.info('Creating fleet')

    response = client.create_fleet(
        Name=appstream_fleet['Name'],
        ImageName=appstream_fleet['ImageName'],
        InstanceType=appstream_fleet['InstanceType'],
        FleetType=appstream_fleet['FleetType'],
        ComputeCapacity={
            'DesiredInstances': int(appstream_fleet['DesiredInstances'])
        },
        VpcConfig={
            'SubnetIds': appstream_fleet['SubnetIds'],
            'SecurityGroupIds': appstream_fleet['SecurityGroupIds']
        },
        Description=appstream_fleet['Description'],
        DisplayName=appstream_fleet['DisplayName'],
        DomainJoinInfo={
            'DirectoryName': appstream_fleet['DirectoryName'],
            'OrganizationalUnitDistinguishedName': appstream_fleet['OrganizationalUnitDistinguishedName']
        },
        Tags=appstream_fleet['Tags'],
        IamRoleArn=appstream_fleet['IamRoleArn'],
        StreamView=appstream_fleet['StreamView']
    )

    LOGGER.info('Fleet created')
    send_response(event, context, "SUCCESS", {"Message": "Resource creation successful!"})

  • Delete: in this example, the fleet has to be stopped before being deleted

elif event['RequestType'] == "Delete":
    LOGGER.info('Deleting image builder')
    response = client.delete_image_builder(
        Name=appstream_image_builder['Name']
    )

    LOGGER.info('Stopping fleet')
    response = client.stop_fleet(
        Name=appstream_fleet['Name']
    )

    LOGGER.info('Waiting for fleet to be stopped ...')
    response = client.describe_fleets(
        Names=[
            appstream_fleet['Name'],
        ]
    )
    state = response['Fleets'][0]['State']

    while state != 'STOPPED':
        time.sleep(5)
        response = client.describe_fleets(
            Names=[
                appstream_fleet['Name'],
            ]
        )
        state = response['Fleets'][0]['State']
        LOGGER.info('Waiting for fleet to be stopped ...')

    LOGGER.info('Deleting fleet')
    response = client.delete_fleet(
        Name=appstream_fleet['Name']
    )

    send_response(event, context, "SUCCESS", {"Message": "Resource deletion successful!"})
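The describe/sleep polling used above can be factored into a small reusable helper. This is just a sketch: `wait_until` is a name of ours, and in the real Lambda `get_state` would wrap the `describe_fleets` call:

```python
import time

def wait_until(get_state, target, interval=5, max_attempts=60):
    """Poll get_state() until it returns target; raise if it never does.

    get_state: zero-argument callable returning the current state string,
    e.g. lambda: client.describe_fleets(Names=[name])['Fleets'][0]['State']
    """
    for _ in range(max_attempts):
        if get_state() == target:
            return
        time.sleep(interval)
    raise RuntimeError('state never reached %r' % target)
```

Bounding the number of attempts matters here: the Lambda timeout is finite, and if the function dies without calling back, the CloudFormation stack hangs until its own timeout.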

As explained earlier in this article, the CloudFormation custom resource waits for a status from your Lambda function, sent by an HTTP call to the pre-signed URL provided in the Lambda event. The code sample below does exactly that (you can see the calls to this function in the previous examples):

import json
from urllib2 import build_opener, HTTPHandler, Request  # python2.7 runtime

def send_response(event, context, response_status, response_data):
    '''Send a resource manipulation status response to CloudFormation'''
    response_body = json.dumps({
        "Status": response_status,
        "Reason": "See the details in CloudWatch Log Stream: " + context.log_stream_name,
        "PhysicalResourceId": context.log_stream_name,
        "StackId": event['StackId'],
        "RequestId": event['RequestId'],
        "LogicalResourceId": event['LogicalResourceId'],
        "Data": response_data
    })

    LOGGER.info('ResponseURL: %s', event['ResponseURL'])
    LOGGER.info('ResponseBody: %s', response_body)

    opener = build_opener(HTTPHandler)
    request = Request(event['ResponseURL'], data=response_body)
    request.add_header('Content-Type', '')
    request.add_header('Content-Length', len(response_body))
    request.get_method = lambda: 'PUT'
    response = opener.open(request)
    LOGGER.info("Status code: %s", response.getcode())
    LOGGER.info("Status message: %s", response.msg)
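The function above relies on urllib2, which only exists on the Python 2 runtime. On the Python 3 runtimes Lambda supports today, the same PUT request would be built with urllib.request, and the body must be encoded to bytes. A minimal sketch — `build_callback_request` is a hypothetical helper name, and the caller would still send it with `urllib.request.urlopen`:

```python
import urllib.request

def build_callback_request(response_url, response_body):
    """Build the PUT request CloudFormation expects on its pre-signed URL."""
    data = response_body.encode('utf-8')  # urllib in Python 3 requires bytes
    return urllib.request.Request(
        response_url,
        data=data,
        headers={'Content-Type': '', 'Content-Length': str(len(data))},
        method='PUT',  # replaces the get_method override needed in Python 2
    )
```

The `method='PUT'` keyword (available since Python 3.3) makes the get_method monkey-patching from the Python 2 version unnecessary.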

Building the cloudformation template

Then I created the Lambda function and the custom resource in CloudFormation.

You will see that the Image Builder and Fleet resource has the Custom::IBAndFleetBuilder type, and that the ServiceToken field is the Lambda function's ARN.

  ImageBuilderAndFleetCreationFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: "appstream-image-sources"
        S3Key: "ImageBuilderAndFleetCreationFunction/handler.zip"
      Handler: "handler.handler"
      Timeout: 300
      Runtime: python2.7
      Role: 
        Fn::ImportValue:
          !Sub "${Env}-LambdaAppstreamCreationRoleArn"
  
  ImageBuilderAndFleetCreationCustom:
    Type: Custom::IBAndFleetBuilder
    Properties:
      ServiceToken: !GetAtt ImageBuilderAndFleetCreationFunction.Arn
      
      AppstreamImageBuilder:
        Name: !Sub "${Project}-Image-Builder"
        ImageName: !Ref ImageBuilderBaseImageName
        InstanceType: !Ref ImageBuilderInstanceType
        Description: !Sub "Appstream Image builder for project ${Project}"
        DisplayName: !Sub "${Project}-Image-Builder"
        SubnetIds: !Ref ImageBuilderSubnetlist
        SecurityGroupIds: !Ref ImageBuilderSecuritygroupslist
        IamRoleArn: 
          Fn::ImportValue:
            !Sub "${Env}-AppstreamImageBuilderRoleArn"
        DirectoryName: !Ref ImageBuilderDirectoryName
        OrganizationalUnitDistinguishedName: !Ref ImageBuilderOrganizationalUnitDistinguishedName
        Tags: 
            Env: !Ref Env
            Project: !Ref Project
            
      AppstreamFleet:
        Name: !Sub "${Project}-Fleet"
        ImageName: !Ref FleetDefaultImageName
        InstanceType: !Ref FleetInstanceType
        FleetType: !Ref FleetType
        DesiredInstances: !Ref FleetDesiredInstances
        SubnetIds: !Ref FleetSubnetlist
        SecurityGroupIds: !Ref FleetSecuritygroupslist
        Description: !Sub "Appstream Fleet for project ${Project}"
        DisplayName: !Sub "${Project}-Fleet"
        DirectoryName: !Ref FleetDirectoryName
        OrganizationalUnitDistinguishedName: !Ref FleetOrganizationalUnitDistinguishedName
        Tags:
          Env: !Ref Env
          Project: !Ref Project
        IamRoleArn: 
          Fn::ImportValue:
            !Sub "${Env}-AppstreamFleetRoleArn"
        StreamView: !Ref FleetStreamView
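One more detail worth knowing: the keys returned in the callback's Data map become attributes of the custom resource, readable with !GetAtt. For example, a hypothetical stack output exposing the Message key returned by send_response could look like:

```yaml
Outputs:
  CreationMessage:
    Value: !GetAtt ImageBuilderAndFleetCreationCustom.Message
```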

Conclusion

Thanks to the custom resource, I delivered a fully working CloudFormation template to the customer, which they can use to create all of their AppStream resources automatically in one click, and to delete them when they are no longer needed.

This article gives you a small overview of what can be done with CloudFormation custom resources.

Furthermore, note that the CloudFormation stack also calls the custom resource on updates, so we could implement an update function that updates the AWS resources accordingly.

TeamWork and Corexpert achieve 50 AWS certifications!
https://blog.corexpert.net/en/2018/10/05/teamwork-and-corexpert-achieve-50-aws-certifications/
Fri, 05 Oct 2018 13:50:30 +0000

AWS is an environment that is always shifting and evolving: numerous updates and announcements drive AWS's continuous innovation.

How do you verify competency in building, deploying, and maintaining the infrastructure offered by the platform?
With the certifications proposed by AWS!

Organized by competency, these exams reward recommending the right services, promote well-architected designs, and test the candidate's knowledge against deep, hands-on professional experience.

Today, TeamWork and Corexpert have reached a milestone in the total number of certifications obtained: our team now holds 50 of them. This commitment to the AWS cloud testifies to our capacity to adapt and to answer a wide range of customer requests. Our experts are ready to be challenged by your questions and will guide your AWS projects to success. Cost optimization, migration from on-premises, and task automation are just some of the problems (among many others!) our team can help with on your journey to the cloud.

TeamWork and Corexpert also hold a competency for the integration of SAP on AWS and are a privileged partner for AppStream 2.0.

First steps with Amazon EC2 Container Service (101) – Cluster & task
https://blog.corexpert.net/en/2018/01/17/first-steps-with-amazon-ec2-container-service-101-cluster-task/
Wed, 17 Jan 2018 12:54:25 +0000

We continue the series “First steps with Amazon EC2 Container Service”, focusing on tasks and the clusters that will host them.

1 – What is a Cluster

An ECS cluster is a group of EC2 instances that will host your containers.

ECS Cluster

A cluster can contain one or more instances of different types and sizes. In our case, we will be using a t2.micro.

2 – ECS cluster creation

We connect to the AWS web console for ECS and go to the “Clusters” section in the left menu. On the next screen (the cluster list), we click “Create Cluster” to create our first cluster.

The cluster creation screen is a very complete form with many options. We are going to use the following fields:

  • Cluster Name : helloworldCluster
  • Provisioning Model : On-Demand Instance
  • EC2 instance type : t2.micro
  • VPC : choose the VPC where you want to create your instances
  • Security group : choose/create a security group that opens port 80

Click the blue “Create” button to create the cluster.
It should now appear in the clusters list.

ECS Cluster

3 – Task definition or how to define the containers launch

A task definition is a list of parameters that determine how our containers will be launched.

To create one, go to “Task Definitions” in the left menu, then click the blue “Create new Task Definition” button.

This first task definition will let us launch a container with its HTTP port (80) mapped to port 8080 on the host instance.

In our case, we fill in the following fields:

  • Task Definition Name : Helloworld-1
  • Container Definitions : click “add container”
    • Container name : Helloworld
    • Image : 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helloworld:latest
    • Memory Limits (MB) : 128
    • Port mappings :
      • Host : 8080
      • Container : 80
      • protocol : tcp
    • Click the button “Add”

You can then click the blue “Create” button

ECS Task definition

Plenty of other options are available in the container definitions, allowing a precise description of the containers' needs (volume mappings, links between containers, …).
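The console fields above map directly onto the parameters of the ECS RegisterTaskDefinition API. As a sketch (the helper name is ours; a real call would pass the resulting dict to boto3's ECS client via `register_task_definition(**request)`):

```python
def task_definition_request(family, name, image, memory, host_port, container_port):
    """Build a RegisterTaskDefinition request mirroring the console walkthrough."""
    return {
        'family': family,                      # "Task Definition Name"
        'containerDefinitions': [{
            'name': name,                      # "Container name"
            'image': image,                    # ECR image URI
            'memory': memory,                  # "Memory Limits (MB)"
            'portMappings': [{
                'hostPort': host_port,         # 8080 on the instance
                'containerPort': container_port,  # 80 inside the container
                'protocol': 'tcp',
            }],
        }],
    }
```
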

4 – Task execution

Now that we have created our host machine cluster and described how to launch the container, we can finally run it.

To do this, click the “Clusters” menu and choose our helloworldCluster cluster from the list. We arrive here:

ECS Cluster Hello world

Click the “Tasks” tab, then the “Run new Task” button.
On the next screen, choose all the elements we created previously.

ECS run task

The task is created and ready to receive requests.


5 – Connection to the container

All that remains is to check that the container responds to our HTTP requests.

We connect directly to the host instance on port 8080.
To find its IP, just click the task name (in our case 67f6a5b4-4…) to display its details.

In the container section, click the triangle next to the container name (helloworld) to expand the details; the IP is in the “Network Bindings” section.

ECS host IP

You can then open your preferred browser and enter the container's URL to display the page.

ECS task result

Congratulations, you have run your first container on Amazon EC2 Container Service.

In the next article we will go further by creating a service based on the existing container image, so several containers will be launched and able to respond to your requests.

First steps with Amazon EC2 Container Service (101) – ECR
https://blog.corexpert.net/en/2017/08/30/first-steps-with-amazon-ec2-container-service-101-ecr/
Wed, 30 Aug 2017 07:33:50 +0000

Docker is a technology that has been making a lot of noise in the IT field over the past few years.

docker-vs-aws on google search

All major public clouds (Amazon Web Services, Google Cloud, Microsoft Azure) offer a more or less integrated solution for container management. In these posts we explain how to get started successfully with the managed service Amazon EC2 Container Service.

This series assumes that you have already created a VPC where the host instances for the Docker containers will run, and that the AWS CLI is installed on your computer. We will use the “Hello-world” image (dockercloud/hello-world) available on Docker Hub.

1 – Creation of a Docker private repository

To be deployed, a container image must be available in a Docker repository. Multiple solutions can be used; Amazon Web Services (AWS) offers a private managed registry (ECR) at an interesting price ($0.10/GB/month as of 01/08/2017).

To deploy our image to ECR (Amazon EC2 Container Registry), we start by pulling it from Docker Hub.

docker pull dockercloud/hello-world

Now that we have our image, we will push it to ECR.
First, we connect to the AWS web console for ECS.

1a – If you have not used the service yet

You will land on the “Get started” page; just push the blue “Get started” button in the middle of the page.

On the next screen (Getting Started with Amazon EC2 Container Service), uncheck the box so the ECS demo (sample) is not deployed.


1b – If you have already used ECS

Just go to “Repositories”, then click “Create Repository”.

1c – On the next page

You can give your repository a name — helloworld in our case. Then click the “Next step” button.


1d – The final page

It gives us all the information needed to use the created repository.

2 – Add an image to ECR

We are going to push our HelloWorld image to the repository. To do that, we connect to ECR from our machine (our repository is in Ireland):

aws ecr get-login --no-include-email --region eu-west-1

We get a command line that will allow us to connect to ECR:

docker login -u AWS -p eyJwRXl…...iMSIsZnR5cGUKOiJEQWRBX0tFWSJ8 https://123456789012.dkr.ecr.eu-west-1.amazonaws.com
 Login Succeeded

Now that we are connected, we can push our local image:

docker tag dockercloud/hello-world:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helloworld:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helloworld:latest
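The registry URI used by the tag and push commands follows a fixed pattern: account ID, region, repository name, and tag. A small sketch — the helper name is ours, and 123456789012 is the same dummy account ID used throughout this article:

```python
def ecr_image_uri(account_id, region, repo, tag='latest'):
    """Assemble the ECR image URI expected by docker tag/push."""
    return '{0}.dkr.ecr.{1}.amazonaws.com/{2}:{3}'.format(
        account_id, region, repo, tag)
```
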

If you connect to the AWS web console, you will see our image in the repository.


In the next post we will use this image to launch a Docker container on ECS.

See you soon
