Merge branch 'master' of github.com:antoooks/website
Showing 109 changed files with 3,602 additions and 1,177 deletions.
126 changes: 126 additions & 0 deletions
content/en/blog/_posts/2019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease.md
---
layout: blog
title: 'Kyma - extend and build on Kubernetes with ease'
date: 2019-05-23
---

**Authors:** Lukasz Gornicki (SAP)

According to this recently completed [CNCF Survey](https://www.cncf.io/blog/2018/08/29/cncf-survey-use-of-cloud-native-technologies-in-production-has-grown-over-200-percent/), the adoption rate of cloud native technologies in production is growing rapidly. Kubernetes is at the heart of this technological revolution. Naturally, the growth of cloud native technologies has been accompanied by the growth of the ecosystem that surrounds them. Of course, the complexity of cloud native technologies has increased as well. Just google the phrase “Kubernetes is hard”, and you’ll get plenty of articles that explain this complexity problem. The best thing about the CNCF community is that problems like this can be solved by smart people building new tools to enable Kubernetes users: projects like Knative and its [Build resource](https://github.com/knative/build) extension, for example, serve to reduce complexity across a range of scenarios. Even though increasing complexity might seem like the most important issue to tackle, it is not the only challenge you face when transitioning to cloud native.

## Problems to solve

### Picking the right technology is hard

Now that you understand Kubernetes, your teams are trained, and you’ve started building applications on top of it, it’s time to face a new layer of challenges. Cloud native doesn’t just mean deploying a platform for developers to build on. Developers also need storage, backup, monitoring, logging, and a service mesh to enforce policies upon data in transit. Each of these individual systems must be properly configured and deployed, as well as logged, monitored, and backed up on its own. The CNCF helps here with its [landscape](https://landscape.cncf.io/) overview of all cloud native technologies, but the list is huge and can be overwhelming.

This is where [Kyma](http://kyma-project.io) will make your life easier. Its mission statement is to enable a flexible and easy way of extending applications.

<img src="/images/blog/2019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease/kyma-center.png" width="40%" alt="Kyma in center" />

This project is designed to give you the tools you need to write an end-to-end, production-grade cloud native application. [Kyma](https://github.com/kyma-project/kyma/) was donated to the open source community by [SAP](https://www.sap.com), a company with great experience in writing production-grade cloud native applications. That’s why we’re so excited to [announce](https://twitter.com/kymaproject/status/1121426458243678209) the first major release of [Kyma 1.0](https://github.com/kyma-project/kyma/releases/tag/1.0.0)!

### Deciding on the path from monolith to cloud native is hard

Try googling `monolith to cloud native` or `monolith to microservices` and you’ll get plenty of talks and papers that tackle this challenge. There are many different paths available for migrating a monolith to the cloud, and our experience has taught us to be quite opinionated in this area. First, let's answer the question of why you’d want to move from monolith to cloud native. The goals driving this move are typically:

- Increased scalability.
- Faster implementation of new features.
- A more flexible approach to extensibility.

You do not have to rewrite your monolith to achieve these goals. Why spend all that time rewriting functionality that you already have? Just focus on enabling your monolith to support an [event-driven architecture](https://en.wikipedia.org/wiki/Event-driven_architecture).

## How does Kyma solve your challenges?

### What is Kyma?

[Kyma](https://kyma-project.io/docs/root/kyma/#overview-overview) runs on Kubernetes and consists of a number of different components, three of which are:

- [Application connector](https://kyma-project.io/docs/components/application-connector/), which you can use to connect any application to a Kubernetes cluster and expose its APIs and Events through the [Kubernetes Service Catalog](https://github.com/kubernetes-incubator/service-catalog).
- [Serverless](https://kyma-project.io/docs/components/serverless/), which enables you to easily write extensions for your application. Your function code can be triggered by API calls as well as by events coming from external systems (see the minimal handler shape below). You can also securely call back the integrated system from your function.
- [Service Catalog](https://kyma-project.io/docs/components/service-catalog/), which exposes integrated systems. This integration also enables you to use services from hyperscalers like Azure, AWS, or Google Cloud. [Kyma](https://kyma-project.io/docs/components/service-catalog/#service-brokers-service-brokers) allows for easy integration of the official service brokers maintained by Microsoft and Google.
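
To make the programming model tangible, here is the minimal shape of such a function; it mirrors the handler signature used in the full example later in this post, with a placeholder body:

```js
/* Minimal sketch of a Kyma serverless function: a Node.js module that
   exports a "main" handler receiving the triggering event and its
   context. The body here is only a placeholder. */
module.exports = {
  main: async function (event, context) {
    console.log("Triggered with payload:", event.data);
    return "OK";
  }
};
```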

![core components](/images/blog/2019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease/ac-s-sc.svg)

You can watch [this video](https://www.youtube.com/watch?v=wJzVWFGkiKk) for a short overview of Kyma's key features, based on a real demo scenario.

### We picked the right technologies for you

A project like Kyma can provide reliable extensibility only if it is properly monitored and configured. We decided not to reinvent the wheel: there are many great projects in the CNCF landscape, most with huge communities behind them, so we picked the best ones and glued them together in Kyma. Here is the same architecture diagram as above, but with a focus on the projects we put together to create Kyma:

<img src="/images/blog/2019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease/arch.png" width="70%" alt="Kyma architecture" />

- Monitoring and alerting are based on [Prometheus](https://prometheus.io/) and [Grafana](https://grafana.com/)
- Logging is based on [Loki](https://grafana.com/loki)
- Eventing uses [Knative](https://github.com/knative/eventing/) and [NATS](https://nats.io/)
- Asset management uses [Minio](https://min.io/) as storage
- Service Mesh is based on [Istio](https://istio.io/)
- Tracing is done with [Jaeger](https://www.jaegertracing.io/)
- Authentication is supported by [dex](https://github.com/dexidp/dex)

You don't have to integrate these tools yourself: we made sure they all play together well and are always up to date (Kyma is already using Istio 1.1). With our custom [Installer](https://github.com/kyma-project/kyma/tree/master/components/installer) and [Helm](https://helm.sh/) charts, we enabled easy installation and easy upgrades to new versions of Kyma.

### Do not rewrite your monoliths

Rewriting is hard, costs a fortune, and in most cases is not needed. At the end of the day, what you need is to be able to write and ship new features to production more quickly. You can do it by connecting your monolith to Kyma using the [Application Connector](https://kyma-project.io/docs/components/application-connector). In short, this component makes sure that:

- You can securely call back the registered monolith without having to take care of authorization yourself, as the Application Connector handles it (see the sketch below).
- Events sent from your monolith securely reach the Kyma Event Bus.
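
To make the first point concrete, here is a minimal sketch of such a callback from inside a function. It assumes that binding the monolith's exposed service to the function injects the service's gateway URL as an environment variable; the `GATEWAY_URL` name and the `/orders` path are illustrative assumptions, not a documented contract:

```js
const axios = require("axios");

/* Hypothetical callback into the registered monolith. The gateway URL
   is assumed to be injected by the service binding (the variable name
   and the path are illustrative). The Application Connector gateway
   attaches the credentials, so no authorization logic appears here. */
async function fetchOrder(orderId) {
  const response = await axios.get(`${process.env.GATEWAY_URL}/orders/${orderId}`);
  return response.data;
}
```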

At the moment, your monolith can consume three different types of services: REST (with an [OpenAPI](https://www.openapis.org/) specification) and OData (with an Entity Data Model specification) for synchronous communication, and, for asynchronous communication, you can register a catalog of events based on the [AsyncAPI](https://www.asyncapi.com/) specification. Your events are later delivered internally through a [NATS Streaming](https://nats.io/) channel with [Knative eventing](https://github.com/knative/eventing/).
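
For the asynchronous path, publishing an event from the monolith boils down to one HTTP call to the Application Connector's gateway. The sketch below is only illustrative: the gateway host, the application name (`mycommerce`), and the exact field names are my assumptions about the events API, so check the Application Connector documentation for the current contract:

```js
const axios = require("axios");

/* Illustrative only: the host, application name, and event field names
   are assumptions; consult the Application Connector docs for the exact
   events API. */
async function publishReviewEvent(comment) {
  return axios.post("https://gateway.kyma.example.com/mycommerce/v1/events", {
    "event-type": "customer.review.created",
    "event-type-version": "v1",
    "event-time": new Date().toISOString(),
    "data": { comment: comment }
  });
}
```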

Once your monolith's services are connected, you can provision them in selected Namespaces thanks to the previously mentioned [Service Catalog](https://kyma-project.io/docs/components/service-catalog/) integration. You, as a developer, can go to the catalog and see a list of all the services you can consume: services from your monolith, and services from third-party providers available through registered Service Brokers, like [Azure's OSBA](https://github.com/Azure/open-service-broker-azure). It is a single place with everything you need. If you want to stand up a new application, everything you need is already available in Kyma.

### Finally some code

Check out some code I wrote to integrate a monolith with Azure services. I wanted to understand the sentiment of the comments customers share in the product's review section. On every event carrying a review comment, I wanted to call a machine learning service for sentiment analysis and, in the case of a negative comment, store it in a database for later review. This is the code of a function created with our [Serverless](https://kyma-project.io/docs/components/serverless) component. Pay attention to my code comments:

> You can watch [this](https://www.youtube.com/watch?v=wJzVWFGkiKk) short video for a full demo of the sentiment analysis function.

```js
/* This is a function powered by the Node.js runtime, so I have to import some necessary dependencies. I chose Azure's Cosmos DB, which is Mongo-compatible, so I can use a MongoClient. */
const axios = require("axios");
const MongoClient = require('mongodb').MongoClient;

module.exports = { main: async function (event, context) {
  /* My function was triggered because it is subscribed to the customer review event. I have access to the payload of the event. */
  let negative = await isNegative(event.data.comment)

  if (negative) {
    console.log("Customer sentiment is negative:", event.data)
    await mongoInsert(event.data)
  } else {
    console.log("This positive comment was not saved:", event.data)
  }
}}

/* As in the isNegative function below, I focus on the usage of the MongoClient API. The necessary information about the database location, and the credentials needed to call it, is injected into my function; I just need to read the proper environment variable. The client is declared before the try block so it can be closed safely even if the insert fails. */
async function mongoInsert(data) {
  let client;
  try {
    client = await MongoClient.connect(process.env.connectionString, { useNewUrlParser: true });
    const db = client.db('mycommerce');
    const collection = db.collection('comments');
    return await collection.insertOne(data);
  } finally {
    if (client) client.close();
  }
}

/* This function calls Azure's Text Analytics service to get information about the sentiment. Notice the process.env.textAnalyticsEndpoint and process.env.textAnalyticsKey parts. When I wrote this function, I didn't have to go to Azure's console to get these details. These variables were automatically injected into my function thanks to our integration with the Service Catalog and our Service Binding Usage controller that pairs the binding with a function. */
async function isNegative(comment) {
  let response = await axios.post(`${process.env.textAnalyticsEndpoint}/sentiment`,
    { documents: [{ id: '1', text: comment }] }, { headers: { 'Ocp-Apim-Subscription-Key': process.env.textAnalyticsKey } })
  return response.data.documents[0].score < 0.5
}
```

Thanks to Kyma, I don't have to worry about the infrastructure around my function. As I mentioned, I have all the tools I need in Kyma, and they are integrated with one another. I can quickly access my logs through [Loki](https://grafana.com/loki), and a preconfigured Grafana dashboard showing the metrics of my lambda, delivered thanks to [Prometheus](https://prometheus.io/) and [Istio](https://istio.io/).

<img src="/images/blog/2019-05-23-Kyma-extend-and-build-on-kubernetes-with-ease/grafana-lambda.png" width="70%" alt="Grafana with preconfigured lambda dashboard" />

Such an approach gives you a lot of flexibility in adding new functionality. It also gives you time to rethink whether you need to rewrite old functionality at all.

## Contribute and give feedback

Kyma is an open source project, and we would love your help to make it grow. After reading this post, you already know that we don't want to reinvent the wheel. We stay true to this approach in our work model, which welcomes community contributors. We work in [Special Interest Groups](https://github.com/kyma-project/community/tree/master/sig-and-wg) and have publicly recorded meetings that you can join at any time, so the setup is similar to what you know from Kubernetes itself. Feel free to also share your feedback with us through [Twitter](https://twitter.com/kymaproject) or [Slack](http://slack.kyma-project.io).
54 changes: 54 additions & 0 deletions
content/en/blog/_posts/Kubernetes-Cloud-Native-and-the-Future-of-Software.md
---
title: 'Kubernetes, Cloud Native, and the Future of Software'
date: 2019-05-17
---

**Authors:** Brian Grant (Google), Jaice Singer DuMars (Google)

# Kubernetes, Cloud Native, and the Future of Software

Five years ago this June, Google Cloud announced a new application management technology called Kubernetes. It began with a [simple open source commit](https://github.com/kubernetes/kubernetes/commit/2c4b3a562ce34cddc3f8218a2c4d11c7310e6d56), followed the next day by a [one-paragraph blog mention](https://cloudplatform.googleblog.com/2014/06/an-update-on-container-support-on-google-cloud-platform.html) about container support. Later in the week, Eric Brewer [talked about Kubernetes for the first time](https://www.youtube.com/watch?v=YrxnVKZeqK8) at DockerCon. And soon the world was watching.

We’re delighted to see Kubernetes become core to the creation and operation of modern software, and thereby a key part of the global economy. To us, the success of Kubernetes represents even more: a business transition with truly worldwide implications, thanks to the unprecedented cooperation afforded by the open source software movement.

Like any important technology, Kubernetes has become about more than just itself; it has positively affected the environment in which it arose, changing how software is deployed at scale, how work is done, and how corporations engage with big open source projects.

Let’s take a look at how this happened, since it tells us a lot about where we are today, and what might be happening next.

**Beginnings**

The most important precursor to Kubernetes was the rise of application containers. Docker, the first tool to really make containers usable by a broad audience, began as an open source project in 2013. By containerizing an application, developers could achieve easier language runtime management, deployment, and scalability. This triggered a sea change in the application ecosystem. Containers made stateless applications easily scalable and provided an immutable deployment artifact that drastically reduced the number of variables previously encountered between test and production systems.

While containers presented strong stand-alone value for developers, the next challenge was how to deliver and manage services, applications, and architectures that spanned multiple containers and multiple hosts.

Google had already encountered similar issues within its own IT infrastructure. Running the world’s most popular search engine (and several other products with millions of users) led to early innovation around, and adoption of, containers. Kubernetes was inspired by Borg, Google’s internal platform for scheduling and managing the hundreds of millions, and eventually billions, of containers that implement all of our services.

Kubernetes is more than just “Borg, for everyone.” It distills the most successful architectural and API patterns of prior systems and couples them with load balancing, authorization policies, and other features needed to run and manage applications at scale. This in turn provides the groundwork for cluster-wide abstractions that allow true portability across clouds.

The November 2014 [alpha launch](https://cloudplatform.googleblog.com/2014/11/google-cloud-platform-live-introducing-container-engine-cloud-networking-and-much-more.html) of Google Cloud’s [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) introduced managed Kubernetes. There was an explosion of innovation around Kubernetes, and companies from enterprises down to startups saw barriers to adoption fall away. Google, Red Hat, and others in the community increased their investment of people, experience, and architectural know-how to ensure it was ready for increasingly mission-critical workloads. The response was a wave of adoption that swept it to the forefront of the crowded container management space.

**The Rise of Cloud Native**

Every enterprise, regardless of its core business, is embracing more digital technology. The ability to rapidly adapt is fundamental to continued growth and competitiveness. Cloud native technologies, and especially Kubernetes, arose to meet this need, providing the automation and observability necessary to manage applications at scale and with high velocity. Organizations previously constrained to quarterly deployments of critical applications can now safely deploy multiple times a day.

Kubernetes’s declarative, API-driven infrastructure empowers teams to operate independently, and enables them to focus on their business objectives. An inevitable cultural shift in the workplace has come from enabling greater autonomy and productivity and reducing the toil of development teams.

**Increased engagement with open source**

The ability for teams to rapidly develop and deploy new software creates a virtuous cycle of success for companies and technical practitioners alike. Companies have started to recognize that contributing back to the software projects they use not only improves the performance of the software for their use cases, but also builds critical skills and creates challenging opportunities that help them attract and retain new developers.

The Kubernetes project in particular curates a collaborative culture that encourages contribution and sharing of learning and development with the community. This fosters a positive-sum ecosystem that benefits both contributors and end users equally.

**What’s Next?**

Where Kubernetes is concerned, five years seems like an eternity. That says much about the collective innovation we’ve seen in the community, and the rapid adoption of the technology.

In other ways, it is just the start. New applications such as machine learning, edge computing, and the Internet of Things are finding their way into the cloud native ecosystem via projects like Kubeflow. Kubernetes is almost certain to be at the heart of their success.

Kubernetes may be most successful if it becomes an invisible essential of daily life, like urban plumbing or electrical grids. True standards are dramatic, but they are also taken for granted. As Googler and KubeCon co-chair Janet Kuo said in a [recent keynote](https://www.youtube.com/watch?v=LAO7RuWwfzA), Kubernetes is going to become boring, and that’s a good thing, at least for the majority of people who don’t have to care about container management.

At Google Cloud, we’re still excited about the project, and we go to work on it every day. Yet it’s all of the solutions and extensions that expand from Kubernetes that will dramatically change the world as we know it.

So, as we all celebrate the continued success of Kubernetes, remember to take the time to thank someone you see helping make the community better. It’s up to all of us to foster a cloud native ecosystem that prizes the efforts of everyone who helps maintain and nurture the work we do together.

And, to everyone who has been a part of the global success of Kubernetes, thank you. You have changed the world.