Corexpert & TeamWork Blog (https://blog.corexpert.net) – Resources for AWS cloud architects and developers

Working with Amazon AppStream 2.0 (Thu, 08 Apr 2021)
https://blog.corexpert.net/en/2021/04/08/working-with-amazon-appstream-2-0/

What is Amazon AppStream 2.0?

Ever wanted to serve your business applications directly through an internet browser? Do you want a powerful graphical computing and rendering environment that is directly accessible? AppStream 2.0 opens up these possibilities while providing a secure, efficient, and scalable environment for your users.

Amazon AppStream 2.0 is a fully managed non-persistent application and desktop streaming service that provides users with instant access to Windows desktop applications from anywhere on any device.

How AppStream 2.0 can help your team:

  • Make desktop applications broadly available with no rewrite, by letting users access and run them through their browser
  • Scale without infrastructure: the service manages the AWS resources required to host and run the applications, scales automatically, and provides access to users on demand
  • Pay as you go, with no upfront investment and no infrastructure to maintain: you only pay for the streaming resources you use, plus a small monthly fee per streaming user
  • Give each user a fluid, responsive experience, because the applications run on virtual machines (VMs) optimized for each use case
  • Create a familiar experience for your customers: the appearance of AppStream 2.0 can be customized with branding images, text, and website links

How AppStream 2.0 helps ISVs (Independent Software Vendors):

As a software vendor, you can accelerate the adoption of your desktop application by providing a SaaS version through VDI, with no rewrite. You reach the market faster, extend your customer reach, and benefit from the value of SaaS without incurring the overhead of a major software refactoring. Because Amazon AppStream 2.0 is a fully managed service, ISVs don’t need to plan, deploy, manage, or upgrade any application streaming infrastructure, and they instantly benefit from global availability and pay-as-you-go pricing.

You can also improve the trial, demo, and training experience for your customers by providing software trials on demand, with no local installs or special hardware required.

How AppStream aligns with our cloud vision

With more and more companies working from home, ensuring access to applications that require high levels of performance and compute resources can be critical to maintaining business continuity. VDI solutions such as Amazon AppStream 2.0 are a great way to do this, as they only require the user to have a browser and an internet connection.

As an AWS managed service, AppStream is a great tool to initialize or accelerate a project, by giving access quickly to a desktop application through the user’s browser. The service provides a number of options to manage storage with home folders, personalize multiple user settings, choose between user pools or Active Directory for user management, and much more.

This flexibility lets us design the right solution around AppStream 2.0 to answer the needs of our customers. By leveraging the additional capabilities of the AWS platform, we can build a custom, secure environment around it. Since the service is fully managed by AWS, the technical effort required to administrate an AppStream fleet mostly boils down to monitoring its utilization.
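For instance, fleet utilization can be tracked through Amazon CloudWatch. The sketch below only builds the parameters for such a query on the AWS/AppStream CapacityUtilization metric; the fleet name, time window, and period are assumptions for illustration, and the actual boto3 call is left in a comment:

```python
from datetime import datetime, timedelta

def capacity_utilization_query(fleet_name, hours=24, period_seconds=3600):
    """Build the parameter set for a CloudWatch GetMetricStatistics call
    on an AppStream fleet (sketch; the fleet name is a placeholder)."""
    now = datetime.utcnow()
    return {
        "Namespace": "AWS/AppStream",
        "MetricName": "CapacityUtilization",
        "Dimensions": [{"Name": "Fleet", "Value": fleet_name}],
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": period_seconds,
        "Statistics": ["Average", "Maximum"],
    }

# With a real client, the call would look like:
#   cloudwatch = boto3.client("cloudwatch")
#   stats = cloudwatch.get_metric_statistics(**capacity_utilization_query("my-fleet"))

params = capacity_utilization_query("my-fleet")
print(params["MetricName"])  # CapacityUtilization
```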

How and why we use AppStream

Because the best way to help our customers design and build their solutions is to use the services ourselves, we have been using AppStream for multiple years.

After joining the TeamWork group in September 2017, Corexpert integrated its accounting into SAP within a few weeks. Since January 2018, all of Corexpert’s management has been running in SAP: orders, invoicing, entry of billable time, absence tracking, expense reports, workflows, and so on.

To let our colleagues use the SAP connection client (SAP GUI, pronounced “gooey” or G-U-I, for Graphical User Interface), they needed a client installed on their local workstation as well as an individual VPN account to access the group’s SAP infrastructure. As the Mac client is not native, a Java version is available for it.

Corexpert is “born in the cloud”, and as far as possible we use resources in Amazon Web Services or SaaS applications for our needs. We are not really used to deploying software on user workstations, and we did not see how to do this with Jenkins, so we decided to implement Amazon AppStream 2.0 to accelerate this deployment and ensure it was carried out without any impact on our users.

Read more (in French) here: https://blog.corexpert.net/2018/03/17/corexpert-deploie-sap-gui-avec-amazon-appstream/

 

For any AppStream or other AWS-related projects, please feel free to contact us: https://www.corexpert.net/contact-us/

 

What are CloudFormation custom resources and how to use them (Wed, 18 Nov 2020)
https://blog.corexpert.net/en/2020/11/18/what-are-cloudformations-custom-resources-and-how-to-use-them-2/

What is AWS CloudFormation?

CloudFormation is a very important tool in the life of anyone working on the AWS cloud platform. It helps you provision any AWS resource in a fast and reliable way. It is the best example of what we call IaC: Infrastructure as Code.

CloudFormation makes deployment, maintenance, and updates easier for entire environments, and it also templatises your infrastructure, making it easy to reuse.

AWS services are continuously updated with a lot of nice new features, and each service has its own lifecycle. We can compare the AWS cloud platform to a huge microservices infrastructure.

Nonetheless, we might wonder whether CloudFormation is always up to date with all the other services.

It is not always. Sometimes a feature or a specific configuration is missing from the CloudFormation resource types, and you cannot go any further that way.

Let’s see how to use CloudFormation to create any AWS resource with any configuration we want!

Custom resources to the rescue

Luckily, AWS cloudformation custom resources are here to help.

Custom resources allow you to programmatically provision AWS resources whenever a CloudFormation stack is created, updated, or deleted.

So, how does it work?

Put simply, on a change, CloudFormation calls a Lambda function with a specific event as the trigger, then waits for a callback from this function to determine whether the resource was successfully modified or not.

Notice that, thanks to Lambda, we can use our favorite SDK (Python, Java, Node.js, …) to provision the AWS resources.

As explained above, the CloudFormation custom resource will call a Lambda function, using the event input to pass essential information and parameters to your function. You can find below a sample of what the input will look like:

{
   "RequestType" : "Create",
   "ResponseURL" : "http://pre-signed-S3-url-for-response",
   "StackId" : "arn:aws:cloudformation:us-west-2:123456789012:stack/stack-name/guid",
   "RequestId" : "unique id for this create request",
   "ResourceType" : "Custom::TestResource",
   "LogicalResourceId" : "MyTestResource",
   "ResourceProperties" : {
      "Name" : "Value",
      "List" : [ "1", "2", "3" ]
   }
}

NB: The ResourceProperties field is where you can pass custom parameters to your function.

You can notice that a response URL is also given by CloudFormation. This URL is the endpoint to call back once your function is done; CloudFormation waits on it to learn the status of your actions. Please find below a sample of the response body it expects:

{
   "Status" : "SUCCESS",
   "PhysicalResourceId" : "TestResource1",
   "StackId" : "arn:aws:cloudformation:us-west-2:123456789012:stack/stack-name/guid",
   "RequestId" : "unique id for this create request",
   "LogicalResourceId" : "MyTestResource",
   "Data" : {
      "OutputName1" : "Value1",
"OutputName2" : "Value2"
   }
}
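To make the round trip concrete, here is a minimal, hypothetical handler skeleton that dispatches on RequestType and assembles the response body shown above (no AWS calls here, and a plain string stands in for the Lambda log stream name):

```python
import json

def make_response(event, log_stream, status, data):
    """Assemble the callback body CloudFormation expects on the ResponseURL."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": "See CloudWatch Log Stream: " + log_stream,
        "PhysicalResourceId": log_stream,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,
    }

def handler_sketch(event, log_stream="log-stream"):
    """Dispatch on the RequestType sent by CloudFormation."""
    request_type = event["RequestType"]
    if request_type == "Create":
        data = {"Message": "would create resources here"}
    elif request_type == "Update":
        data = {"Message": "would update resources here"}
    else:  # "Delete"
        data = {"Message": "would delete resources here"}
    # In a real Lambda, this body is PUT to event["ResponseURL"].
    return make_response(event, log_stream, "SUCCESS", data)

sample_event = {
    "RequestType": "Create",
    "StackId": "arn:aws:cloudformation:us-west-2:123456789012:stack/stack-name/guid",
    "RequestId": "unique id for this create request",
    "LogicalResourceId": "MyTestResource",
}
print(json.dumps(handler_sketch(sample_event), indent=2))
```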

Custom resources: use case & implementation

As I often do when working for clients, I first built the whole solution “by hand” in the console, which let me quickly validate the Proof of Concept with them. After this first step, we can start building the IaC using CloudFormation, giving us a template that will be used for all the working environments, up to production.

While working on an AWS AppStream 2.0 project, I was surprised to see that it was impossible to attach IAM roles to either the Image Builder or the Fleet using CloudFormation native resources.

Checking the documentation, I quickly saw that it was possible to do so using the AWS SDK, so I chose to use CloudFormation custom resources to build both the Image Builder and the Fleet.

Building the custom resource lambda function

First, I wrote the lambda code that will create and delete the needed resources.

We retrieve the parameters from the event:

appstream_image_builder = event['ResourceProperties']['AppstreamImageBuilder']
appstream_fleet = event['ResourceProperties']['AppstreamFleet']

Then we check the request type in the event and act accordingly:

  • Create:

if event['RequestType'] == "Create":

    LOGGER.info('appstream_image_builder: \n %s', appstream_image_builder)
    LOGGER.info('Creating image builder')

    response = client.create_image_builder(
        Name=appstream_image_builder['Name'],
        ImageName=appstream_image_builder['ImageName'],
        InstanceType=appstream_image_builder['InstanceType'],
        Description=appstream_image_builder['Description'],
        DisplayName=appstream_image_builder['DisplayName'],
        VpcConfig={
            'SubnetIds': appstream_image_builder['SubnetIds'],
            'SecurityGroupIds': appstream_image_builder['SecurityGroupIds']
        },
        IamRoleArn=appstream_image_builder['IamRoleArn'],
        DomainJoinInfo={
            'DirectoryName': appstream_image_builder['DirectoryName'],
            'OrganizationalUnitDistinguishedName': appstream_image_builder['OrganizationalUnitDistinguishedName']
        },
        Tags=appstream_image_builder['Tags']
    )

    LOGGER.info('Image builder created')

    LOGGER.info('appstream_fleet: \n %s', appstream_fleet)
    LOGGER.info('Creating fleet')

    response = client.create_fleet(
        Name=appstream_fleet['Name'],
        ImageName=appstream_fleet['ImageName'],
        InstanceType=appstream_fleet['InstanceType'],
        FleetType=appstream_fleet['FleetType'],
        ComputeCapacity={
            'DesiredInstances': int(appstream_fleet['DesiredInstances'])
        },
        VpcConfig={
            'SubnetIds': appstream_fleet['SubnetIds'],
            'SecurityGroupIds': appstream_fleet['SecurityGroupIds']
        },
        Description=appstream_fleet['Description'],
        DisplayName=appstream_fleet['DisplayName'],
        DomainJoinInfo={
            'DirectoryName': appstream_fleet['DirectoryName'],
            'OrganizationalUnitDistinguishedName': appstream_fleet['OrganizationalUnitDistinguishedName']
        },
        Tags=appstream_fleet['Tags'],
        IamRoleArn=appstream_fleet['IamRoleArn'],
        StreamView=appstream_fleet['StreamView']
    )

    LOGGER.info('Fleet created')
    send_response(event, context, "SUCCESS", {"Message": "Resource creation successful!"})

  • Delete: in this example, the fleet has to be stopped before being deleted

elif event['RequestType'] == "Delete":
    LOGGER.info('Deleting image builder')
    response = client.delete_image_builder(
        Name=appstream_image_builder['Name']
    )

    LOGGER.info('Stopping fleet')
    response = client.stop_fleet(
        Name=appstream_fleet['Name']
    )

    LOGGER.info('Waiting for fleet to be stopped ...')
    response = client.describe_fleets(
        Names=[appstream_fleet['Name']]
    )
    state = response['Fleets'][0]['State']

    while state != 'STOPPED':
        time.sleep(5)
        # Poll the fleet from the event parameters (not a hard-coded name)
        response = client.describe_fleets(
            Names=[appstream_fleet['Name']]
        )
        state = response['Fleets'][0]['State']
        LOGGER.info('Waiting for fleet to be stopped ...')

    LOGGER.info('Deleting fleet')
    response = client.delete_fleet(
        Name=appstream_fleet['Name']
    )

    send_response(event, context, "SUCCESS", {"Message": "Resource deletion successful!"})

As explained earlier in this article, the CloudFormation custom resource waits for a status from your Lambda function, which is sent as an HTTP call to a custom URL. The code sample below does exactly that, knowing that the custom URL is given in the Lambda event (you can see the calls to this function in the previous examples):

import json
from urllib2 import build_opener, HTTPHandler, Request  # Python 2.7 runtime

def send_response(event, context, response_status, response_data):
  '''Send a resource manipulation status response to CloudFormation'''
  response_body = json.dumps({
      "Status": response_status,
      "Reason": "See the details in CloudWatch Log Stream: " + context.log_stream_name,
      "PhysicalResourceId": context.log_stream_name,
      "StackId": event['StackId'],
      "RequestId": event['RequestId'],
      "LogicalResourceId": event['LogicalResourceId'],
      "Data": response_data
  })

  LOGGER.info('ResponseURL: %s', event['ResponseURL'])
  LOGGER.info('ResponseBody: %s', response_body)

  opener = build_opener(HTTPHandler)
  request = Request(event['ResponseURL'], data=response_body)
  request.add_header('Content-Type', '')
  request.add_header('Content-Length', len(response_body))
  request.get_method = lambda: 'PUT'
  response = opener.open(request)
  LOGGER.info("Status code: %s", response.getcode())
  LOGGER.info("Status message: %s", response.msg)
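Note that the sample above targets the Python 2.7 runtime, where build_opener, HTTPHandler, and Request come from urllib2. On a Python 3 Lambda runtime, the same callback would be written with urllib.request, and the body must be encoded to bytes. A sketch under that assumption:

```python
import json
import urllib.request

def build_response_body(event, log_stream, response_status, response_data):
    """Assemble the JSON body CloudFormation expects, as bytes (Python 3)."""
    return json.dumps({
        "Status": response_status,
        "Reason": "See the details in CloudWatch Log Stream: " + log_stream,
        "PhysicalResourceId": log_stream,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": response_data,
    }).encode("utf-8")  # urllib needs bytes, not str, on Python 3

def send_response_py3(event, context, response_status, response_data):
    """PUT the status body to the pre-signed ResponseURL from the event."""
    body = build_response_body(event, context.log_stream_name,
                               response_status, response_data)
    request = urllib.request.Request(
        event["ResponseURL"],
        data=body,
        method="PUT",
        headers={"Content-Type": "", "Content-Length": str(len(body))},
    )
    return urllib.request.urlopen(request)  # network call, not exercised here
```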

Building the CloudFormation template

Then I created the Lambda function and the custom resource in CloudFormation.

You will see that the Image Builder and Fleet resource is of the Custom::IBAndFleetBuilder type, and that the ServiceToken field corresponds to the Lambda function’s ARN.

  ImageBuilderAndFleetCreationFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: "appstream-image-sources"
        S3Key: "ImageBuilderAndFleetCreationFunction/handler.zip"
      Handler: "handler.handler"
      Timeout: 300
      Runtime: python2.7
      Role: 
        Fn::ImportValue:
          !Sub "${Env}-LambdaAppstreamCreationRoleArn"
  
  ImageBuilderAndFleetCreationCustom:
    Type: Custom::IBAndFleetBuilder
    Properties:
      ServiceToken: !GetAtt ImageBuilderAndFleetCreationFunction.Arn
      
      AppstreamImageBuilder:
        Name: !Sub "${Project}-Image-Builder"
        ImageName: !Ref ImageBuilderBaseImageName
        InstanceType: !Ref ImageBuilderInstanceType
        Description: !Sub "Appstream Image builder for project ${Project}"
        DisplayName: !Sub "${Project}-Image-Builder"
        SubnetIds: !Ref ImageBuilderSubnetlist
        SecurityGroupIds: !Ref ImageBuilderSecuritygroupslist
        IamRoleArn: 
          Fn::ImportValue:
            !Sub "${Env}-AppstreamImageBuilderRoleArn"
        DirectoryName: !Ref ImageBuilderDirectoryName
        OrganizationalUnitDistinguishedName: !Ref ImageBuilderOrganizationalUnitDistinguishedName
        Tags: 
            Env: !Ref Env
            Project: !Ref Project
            
      AppstreamFleet:
        Name: !Sub "${Project}-Fleet"
        ImageName: !Ref FleetDefaultImageName
        InstanceType: !Ref FleetInstanceType
        FleetType: !Ref FleetType
        DesiredInstances: !Ref FleetDesiredInstances
        SubnetIds: !Ref FleetSubnetlist
        SecurityGroupIds: !Ref FleetSecuritygroupslist
        Description: !Sub "Appstream Fleet for project ${Project}"
        DisplayName: !Sub "${Project}-Fleet"
        DirectoryName: !Ref FleetDirectoryName
        OrganizationalUnitDistinguishedName: !Ref FleetOrganizationalUnitDistinguishedName
        Tags:
          Env: !Ref Env
          Project: !Ref Project
        IamRoleArn: 
          Fn::ImportValue:
            !Sub "${Env}-AppstreamFleetRoleArn"
        StreamView: !Ref FleetStreamView

Conclusion

Thanks to the custom resource, I managed to deliver to the customer a fully functional CloudFormation template that they can use to create all of their AppStream 2.0 resources automatically, in one click, and to delete them when they are no longer needed.

This article gives you a small overview of what can be done with CloudFormation custom resources.

Furthermore, it is important to note that the CloudFormation stack also calls the custom resource when there is an update, so we could implement an update function that updates the AWS resources accordingly.
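On an Update, CloudFormation also sends an OldResourceProperties field alongside ResourceProperties, so the function can compare the two and decide what to do. A small sketch of that decision logic (the list of “immutable” properties is purely illustrative, not taken from the AppStream API):

```python
def diff_properties(event):
    """Return the set of property keys whose values changed between
    the old and new custom resource properties of an Update event."""
    old = event.get("OldResourceProperties", {})
    new = event.get("ResourceProperties", {})
    return {k for k in set(old) | set(new) if old.get(k) != new.get(k)}

def plan_update(event, immutable_keys=("Name", "SubnetIds")):
    """Decide whether an in-place update is enough or the resource must be
    replaced (delete + create). The immutable keys listed here are an
    assumption for the sketch, not taken from a real API contract."""
    changed = diff_properties(event)
    if not changed:
        return "no-op"
    if changed & set(immutable_keys):
        return "replace"
    return "update-in-place"

sample_update = {
    "RequestType": "Update",
    "OldResourceProperties": {"Name": "Fleet-A", "Description": "old"},
    "ResourceProperties": {"Name": "Fleet-A", "Description": "new"},
}
print(plan_update(sample_update))  # update-in-place
```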

Corexpert Achieves AWS DevOps Competency Status (Mon, 07 Sep 2020)
https://blog.corexpert.net/en/2020/09/07/corexpert-achieves-aws-devops-competency-status/

LYON, September 09, 2020. Corexpert is pleased to announce that it has achieved Amazon Web Services (AWS) DevOps Competency status.

This designation recognizes that Corexpert provides a holistic approach to DevOps implementation and has a proven track record of DevOps tooling implementations.

This competency further proves to our customers that the quality and calibre of our AWS-specialist consultancy is of the highest possible standard.

Corexpert notably has proven customer success with a specific focus on Continuous Integration & Continuous Delivery, performance monitoring, and Infrastructure as Code.

“This AWS DevOps Competency is an important achievement in Corexpert’s journey,” said Alexis Dagues, CEO.

“This latest competency alongside our accreditation to deliver Well Architected Reviews and the SAP Competency is a statement that our team is dedicated to helping companies achieve their technology goals by leveraging the agility, breadth of services, and pace of innovation that AWS provides.”

 

About AWS

AWS is enabling scalable, flexible, and cost-effective solutions from startups to global enterprises. To support the seamless integration and deployment of these solutions, AWS established the AWS Competency Program to help customers identify Consulting and Technology APN Partners with deep industry experience and expertise.

 

About Corexpert

Corexpert is a cloud-native company with 14 years of experience, an AWS Advanced Consulting Partner and Value-Added Reseller. The company joined TeamWork Group in 2017 and deploys its AWS competency across all of its 18 locations worldwide. With strong expertise in SAP workloads, Big Data, web stacks, and massive migrations to AWS, Corexpert and TeamWork can deliver globally.

Corexpert – freshly elected APN Training Partner by AWS (Wed, 02 Sep 2020)
https://blog.corexpert.net/en/2020/09/02/corexpert-freshly-elected-apn-training-partner-by-aws/

Lyon, France, September 03, 2020. Corexpert – APN Training Partner

Corexpert, which has built and supported solutions on the AWS cloud platform since 2010, is pleased to announce its new APN Training Partner agreement with AWS.

As IT professionals with advanced AWS cloud skills, we have a proven and reliable history in technical training. We are now proud to further expand these training and learning opportunities through the APN Training Partner Program.

We will thus share our expertise to give companies the opportunity to learn best practices, with live feedback from our expert AWS-approved instructors. These trainings will also help learners prepare for AWS Certification exams, which validate technical skills and expertise.

We also aim to provide the top learning resources and deep knowledge learners need to innovate and build the future using the AWS Cloud. AWS Training content is built on native AWS knowledge and gives learners a path to advance their careers and transform their organizations.

A free virtual webinar will be held on 29 September 2020 at 10 AM (CEST) on the theme: AWS Discovery Day – An Introduction to AWS.

Please register here: https://go.aws/2FXAytB

More information: https://www.corexpert.net/contact-us/


TeamWork and Corexpert achieve 50 AWS certifications! (Fri, 05 Oct 2018)
https://blog.corexpert.net/en/2018/10/05/teamwork-and-corexpert-achieve-50-aws-certifications/

AWS is an environment that is always shifting and evolving: numerous updates and announcements drive the continuous innovation from AWS.

How do you verify the competency needed to build, deploy, and maintain the infrastructure offered by the platform?
With the certifications proposed by AWS!

Categorized by competency, these exams reward recommending the right services, promote the Well-Architected Framework, and check the candidate’s knowledge against deep, professional hands-on experience.

Today, TeamWork and Corexpert have reached a milestone in the total number of certifications obtained: our team now holds 50 certifications. This commitment to the AWS cloud is a testament to our adaptability and our capacity to answer a wide range of customer requests. Our experts are ready to be challenged by your questions and will guide you to success in your AWS projects. Cost optimization, migration from on-premises, and task automation are some of the challenges (among many others!) our team can help you with on your journey to the cloud.

TeamWork and Corexpert also hold a competency in the integration of SAP on AWS and are a privileged partner for AppStream 2.0.

Optimizing Ad Serving with Machine Learning for cost, performance and quality improvements (Fri, 15 Jun 2018)
https://blog.corexpert.net/en/2018/06/15/optimizing-ad-serving-with-machine-learning-for-cost-performance-and-quality-improvements/

About our client

Their Ad-serving technology is optimized for user experience, and is integrated in over 450 mobile and web applications. They serve more than 1 billion ad requests every day to millions of users with a minimal latency. They embraced the Cloud-First spirit by being 100% hosted on AWS from the beginning and they are using over 50 services to create a cutting-edge technology solution.

The challenge

Nowadays, Ad Tech companies have to handle huge traffic volumes to deliver ads. They focus on high-quality ads by controlling traffic and bidding on partner programmatic platforms. This strategy reduces the payout per ad request and therefore limits the revenues of the company. Learning user behaviors and optimizing traffic quickly became inevitable, so they gathered an R&D team to build Machine Learning algorithms.

The solution

User data is processed using an Amazon EMR cluster, which runs every month and processes tens of billions of user events. Amazon S3 is the primary choice for storing raw data (hundreds of terabytes). An Amazon EC2 instance, packaged with Machine Learning libraries, uses the monthly generated dataset to update the currently trained model and prepare a blacklist, which is loaded into Amazon DynamoDB for high-traffic reads.
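As an aside, a blacklist load like this one would typically go through DynamoDB’s BatchWriteItem, which accepts at most 25 items per call. A small sketch of the batching step, where the table name "ad-blacklist" and the "user_id" key are assumptions for illustration:

```python
def chunk(items, size=25):
    """Split a list into DynamoDB-sized batches (BatchWriteItem caps at 25)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def to_put_requests(user_ids):
    """Shape user ids as BatchWriteItem put requests for a hypothetical
    'ad-blacklist' table keyed on 'user_id'."""
    return [{"PutRequest": {"Item": {"user_id": {"S": uid}}}} for uid in user_ids]

# With a real client:
#   dynamodb = boto3.client("dynamodb")
#   for batch in chunk(blacklisted_ids):
#       dynamodb.batch_write_item(RequestItems={"ad-blacklist": to_put_requests(batch)})

batches = chunk([str(i) for i in range(60)])
print([len(b) for b in batches])  # [25, 25, 10]
```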


The benefits

  • The ML algorithm detects and filters out the 60% of total traffic that is not relevant. The R&D team continuously improves its accuracy.
  • AWS infrastructure costs are reduced by 45%. The company is reinvesting this money into new services and quality improvements.
  • Ads get rendered 7x more often, meaning users have a better experience while watching ads in mobile applications.
  • Higher Ad partner trust and Ad revenues. A direct impact is a visible growth in partner integrations and market visibility.

 

First steps with Amazon EC2 Container Service (101) – Cluster & task (Wed, 17 Jan 2018)
https://blog.corexpert.net/en/2018/01/17/first-steps-with-amazon-ec2-container-service-101-cluster-task/

We continue the series “First steps with Amazon EC2 Container Service”, focusing on the tasks and the clusters that will host them.

1 – What is a Cluster

An ECS cluster is a group of EC2 instances that will host your containers.

ECS Cluster

A cluster can contain one or more instances of different types and sizes. In our case, we will be using a t2.micro.

2 – ECS cluster creation

We are going to connect to the AWS web console for ECS and go to the “Clusters” section in the left menu. On the next screen (the cluster list), we are going to click “Create Cluster” to create our first cluster.

The cluster creation screen shows a very complete form with many options. We are going to use the following fields:

  • Cluster Name: helloworldCluster
  • Provisioning Model: On-Demand Instance
  • EC2 instance type: t2.micro
  • VPC: choose the VPC where you want to create your instances
  • Security group: choose/create a security group that opens port 80

Click the blue “Create” button to create the cluster.
It should now appear in the cluster list.

ECS Cluster
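For completeness, the same cluster can be created from the API. Note that the CreateCluster call only creates the logical cluster; the console wizard additionally launches the EC2 instances, which register themselves via the ECS_CLUSTER variable in their user data. A sketch (only the cluster name comes from the walkthrough):

```python
def create_cluster_params(name):
    """Parameters for an ECS CreateCluster call (logical cluster only)."""
    return {"clusterName": name}

def instance_user_data(cluster_name):
    """User data that makes an ECS-optimized instance register
    itself into the given cluster."""
    return "#!/bin/bash\necho ECS_CLUSTER={} >> /etc/ecs/ecs.config\n".format(cluster_name)

# With a real client:
#   ecs = boto3.client("ecs")
#   ecs.create_cluster(**create_cluster_params("helloworldCluster"))

print(instance_user_data("helloworldCluster"))
```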

3 – Task definition or how to define the containers launch

A task definition is a list of parameters that determines how our containers will be launched.

To create it, go to “Task Definitions” in the left menu, then click the blue “Create new Task Definition” button.

This first task definition will allow us to launch a container, with its HTTP (80) port linked to the host instance’s port 8080.

For our case we are going to fill the following fields:

  • Task Definition Name: Helloworld-1
  • Container Definitions: click “Add container”
    • Container name: Helloworld
    • Image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helloworld:latest
    • Memory Limits (MB): 128
    • Port mappings:
      • Host: 8080
      • Container: 80
      • Protocol: tcp
    • Click the “Add” button

You can then click the blue “Create” button.

ECS Task definition

Plenty of other options are available in the container definitions, allowing a precise definition of the containers’ needs (volume mappings, links between containers, …).
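The console form above maps directly onto the RegisterTaskDefinition API. As a sketch, here is the equivalent payload built from the same values as the walkthrough (the image URI is the placeholder from the article); the actual boto3 call is left in a comment:

```python
def helloworld_task_definition():
    """Payload mirroring the console fields from the walkthrough."""
    return {
        "family": "Helloworld-1",
        "containerDefinitions": [
            {
                "name": "Helloworld",
                "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/helloworld:latest",
                "memory": 128,  # hard memory limit, in MB
                "portMappings": [
                    {"hostPort": 8080, "containerPort": 80, "protocol": "tcp"}
                ],
            }
        ],
    }

# With a real client:
#   ecs = boto3.client("ecs")
#   ecs.register_task_definition(**helloworld_task_definition())

td = helloworld_task_definition()
print(td["containerDefinitions"][0]["portMappings"])
```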

4 – Task execution

Now that we have created our host machine cluster and we have described how to launch the container, we can finally run it.

To do this, we will click on the “Cluster” menu and choose from the list our helloworldCluster cluster. We get to here:

ECS Cluster Hello world

Click on the “Task” tab and then on the “Run new Task” button.
On the next screen, choose all the elements we created previously.

ECS run task

The task is created and ready to receive requests.
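The same launch can be done from the API with a single RunTask call against the cluster and task definition created above. A sketch, with the boto3 call left in a comment:

```python
def run_task_params(cluster, task_definition, count=1):
    """Parameters for an ECS RunTask call on the walkthrough's cluster."""
    return {
        "cluster": cluster,
        "taskDefinition": task_definition,  # family or family:revision
        "count": count,
    }

# With a real client:
#   ecs = boto3.client("ecs")
#   response = ecs.run_task(**run_task_params("helloworldCluster", "Helloworld-1"))

print(run_task_params("helloworldCluster", "Helloworld-1"))
```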


5 – Connection to the container

All that remains is to validate that the container responds to our HTTP requests.

We will connect directly to the host instance on port 8080.
To find its IP, just click on the name of the task (in our case 67f6a5b4-4 …) to display the details.

In the container section, click on the triangle next to the container name (helloworld) to display the details; the IP will be in the “Network Bindings” section.

ECS host IP

You can then go to your preferred browser and enter the container’s URL to display the page.

ECS task result

Congratulations, you have run your first container on Amazon EC2 Container Service.

In the next article, we will see how to go further by creating a service based on the existing container image. Several containers will then be launched and able to respond to your requests.

TeamWork announces the acquisition of Corexpert, the French AWS pure player, to accelerate its growth in the Cloud Computing market in France and worldwide (Mon, 25 Sep 2017)
https://blog.corexpert.net/en/2017/09/25/teamwork-announces-the-acquisition-of-corexpert-the-french-aws-pure-player-to-accelerate-its-growth-in-the-cloud-computing-market-in-france-and-worldwide/

Paris, September 25th, 2017 – TeamWork, specialized in consulting, integration, and outsourcing of SAP solutions and technology platforms, announces the acquisition of Corexpert, an Amazon Web Services specialist and pure player. This unique complementarity allows TeamWork to strengthen its position in the cloud computing market in France and worldwide.

Amazon Web Services (AWS), a cloud computing pioneer, provides a wide range of services enabling all types of customers to accelerate the deployment of their infrastructures and business applications. It is also the only cloud computing provider meeting all performance and security requirements to be SAP-certified. Thus, project deployments are accelerated and customers benefit from agility and flexibility, so far reserved for the web industry.

Known as an expert in consulting, designing, and building AWS Cloud infrastructures, Corexpert brings its expertise, experience, and DevOps approach to TeamWork. A certified AWS Advanced Consulting Partner, Corexpert’s technical team develops, deploys, and keeps in operational condition solutions for its customers, based on the most innovative managed services from Amazon Web Services, while continuously optimizing TCO.

TeamWork, a strategic partner in digital transformation, operates in four core businesses: Business Consulting, SAP Business Solutions, Technology Platforms and Data Analytics. Recognized by its clients for its expertise and experience, TeamWork supports both major international accounts and SMEs. TeamWork’s 15 international locations allow it to guarantee 24/7 technical and functional outsourcing support services and geographical proximity to its customers.

“Corexpert has allowed us to accelerate our deployment on the AWS Cloud and the Time to Market of Orinox’s SaaS Graphics Workstation (OCWS) offer. The synergy between TeamWork and Corexpert confirms their ability to support us in our client projects (large French and American energy companies) both internationally and with 24/7 support services” Maxime Fourreau, founder and CEO of Orinox.

The acquisition of Corexpert by TeamWork is part of a strategy to develop an AWS Cloud competence center to address emerging new needs, such as IoT, Big Data and Blockchain, which are growth levers for digital transformation. The collaboration with Corexpert, started several months ago, also enabled TeamWork to consolidate its position as an SAP integrator on the AWS platform.

“The acquisition of Corexpert by TeamWork Group was obvious for two main reasons: our development and partnership with Amazon Web Services, which started with SAP on HANA projects, and the values shared by our two structures. The perfect complementarity of our expertise will allow our customers to pursue or accelerate their digital transformation, in particular with the adoption of the AWS Cloud.” Philippe Rey-Gorrez, CEO of TeamWork Group

“The majority participation of TeamWork Group in the capital of Corexpert is a strong signal to all our local and international customers. Cloud computing projects are increasingly ambitious and require larger, multidisciplinary teams. Corexpert proves its technological and innovative value daily, and joining TeamWork will enable faster, more structured development to meet a demanding market with intermediate-sized partners.” Alexis Daguès, Co-founder and CEO of Corexpert

TeamWork and Corexpert will discuss their partnership at the USF convention on 4 and 5 October at the Lille Grand Palais and during the Transformation Day on November 8 at the Salle de la Mutualité in Paris.
TeamWork and Corexpert will also attend AWS re:Invent in Las Vegas from November 27 to December 1, 2017, an event bringing together more than 40,000 Amazon Web Services customers and partners.

About TeamWork

TeamWork, an international group founded in 1999 in Geneva, is an independent company involved in four core businesses: Business Consulting, SAP Business Solutions, Technology Platforms and Data Analytics. Recognized by its clients for its expertise and experience, TeamWork is a strategic partner in digital transformation, supporting both major international accounts and SMEs. With more than 420 employees and 15 international locations (Switzerland, France, Vietnam, Singapore, China, India, Canada, United States), TeamWork stands out through its ability to guarantee 24/7 technical and functional outsourcing support services and geographical proximity to its customers. The competence, appetite for challenge, human values and talent of its teams allow TeamWork to achieve steady growth and strong customer loyalty. For more information, visit www.teamwork.net

About Corexpert

Corexpert, a pure player of the AWS Cloud, was founded in 2006. A cloud-native company, its team of 20 consists of developers, DevOps engineers and cloud computing architects. They have built their expertise on multiple customer success projects, in particular on the Amazon Web Services Cloud, which Corexpert has been using since 2010.
An AWS Partner since 2014 and an Advanced Consulting Partner since January 2017, Corexpert assists its customers in their use of cloud computing with constant support, and helps customer teams build their skills in the daily use of AWS managed services. As an actor in the transformation of IT organizations, the Corexpert team develops the best AWS Cloud strategy with its customers, accompanies them in the adoption of DevOps, automates their cloud computing infrastructures and provides dynamic, proactive supervision.
www.corexpert.net

Press Contacts

COREXPERT
Alexis Daguès
a.dagues@corexpert.net
+33 (0) 6.67.45.38.35

TEAMWORK
Daphne Savoundiraradjane
daphne.savoundiraradjane@teamwork.net
+33 (0) 6.38.46.07.40


Download PDF source

]]>
First steps with Amazon EC2 Container Service (101) – ECR https://blog.corexpert.net/en/2017/08/30/first-steps-with-amazon-ec2-container-service-101-ecr/ https://blog.corexpert.net/en/2017/08/30/first-steps-with-amazon-ec2-container-service-101-ecr/#respond Wed, 30 Aug 2017 07:33:50 +0000 https://blog.corexpert.net/?p=382 Read More Read More

]]>
Docker is a technology that has been making waves in the IT field for the past few years.

docker-vs-aws on google search

All major public clouds (Amazon Web Services, Google Cloud, Microsoft Azure) offer a more or less integrated solution for container management. In this series of posts we will explain how to get started successfully with the managed service Amazon EC2 Container Service.

This series assumes that you have already created a VPC in which the host instances for the Docker containers will run, and that the AWS CLI is installed on your computer. We are going to use the “hello-world” image (dockercloud/hello-world) available on Docker Hub.

1 – Creation of a Docker private repository

To be deployed, a container image must be available in a Docker registry. Multiple solutions can be used; Amazon Web Services (AWS) offers a managed private registry, Amazon EC2 Container Registry (ECR), at an attractive price ($0.10/GB/month as of 2017-08-01).

To deploy our image to Amazon EC2 Container Registry (ECR), we will start by pulling it from Docker Hub.

docker pull dockercloud/hello-world

Now that we have our image, we will push it to ECR.
To do so, first connect to the AWS web console for ECS.

1a – If you haven’t used the service yet

You will land on the “Get started” page; just click the blue “Get started” button in the middle of the page.

On the next screen (Getting Started with Amazon EC2 Container Service), uncheck the box so that the ECS demo (sample) will not be deployed.

getstarted

1b – If you have already used ECS

Just go to “Repositories”, then click on “Create Repository”.

2 – On the next page

You will be able to give a name to your repository, helloworld in our case. Then click the “Next step” button.

ecr-name

3 – The final page

This page gives all the information needed to use the newly created repository.
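The repository can also be created from the AWS CLI instead of the web console. A minimal sketch, assuming the placeholder account ID 123456789012 and the eu-west-1 region used later in this post:

```shell
# Hypothetical account ID, region and repository name — adjust to your own.
ACCOUNT_ID=123456789012
REGION=eu-west-1
REPO_NAME=helloworld

# One-time creation of the private repository (commented out here because
# it requires valid AWS credentials):
# aws ecr create-repository --repository-name "$REPO_NAME" --region "$REGION"

# ECR repository URIs always follow this fixed pattern:
REPO_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_NAME}"
echo "$REPO_URI"
```

The URI printed at the end is the one you will use to tag and push images in the next section.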

2 – Add an image to ECR

We are going to push our HelloWorld image to the repository. To do that, we first log in to ECR from our machine (our repository is in Ireland, eu-west-1).

aws ecr get-login --no-include-email --region eu-west-1
 docker login -u AWS -p eyJwRXl…...iMSIsZnR5cGUKOiJEQWRBX0tFWSJ8 https://123456789012.dkr.ecr.eu-west-1.amazonaws.com

We get a command line that will allow us to log in to ECR:

docker login -u AWS -p eyJwRXl…...iMSIsZnR5cGUKOiJEQWRBX0tFWSJ8 https://123456789012.dkr.ecr.eu-west-1.amazonaws.com
 Login Succeeded
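Note for readers on a newer AWS CLI: `aws ecr get-login` was removed in AWS CLI v2. A sketch of the equivalent login, assuming the same placeholder account and region as above:

```shell
REGION=eu-west-1
REGISTRY="123456789012.dkr.ecr.${REGION}.amazonaws.com"

# AWS CLI v2 replacement for the removed `aws ecr get-login` (commented out
# because it needs valid AWS credentials and a running Docker daemon):
# aws ecr get-login-password --region "$REGION" \
#   | docker login --username AWS --password-stdin "$REGISTRY"
echo "$REGISTRY"
```

Piping the password into `--password-stdin` also avoids leaving the token in your shell history, which the older `get-login` flow did.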

Now that we are logged in, we can tag and push our local image:

docker tag dockercloud/hello-world:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helloworld:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/helloworld:latest

If you connect to the AWS web console, you will see our image in the repository.

ecr-push
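The push can also be verified from the CLI instead of the web console. A sketch, assuming the same repository name and region as above:

```shell
REPO_NAME=helloworld
REGION=eu-west-1

# List the image tags stored in the repository (commented out because it
# requires AWS credentials for the account owning the registry):
# aws ecr list-images --repository-name "$REPO_NAME" --region "$REGION"
#
# A successful push appears as an entry in the "imageIds" array of the
# JSON response, with its "imageTag" and "imageDigest".
echo "checked $REPO_NAME in $REGION"
```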

In the next post we will use this image to launch a Docker container on ECS.

See you soon

]]>
https://blog.corexpert.net/en/2017/08/30/first-steps-with-amazon-ec2-container-service-101-ecr/feed/ 0