Migrating the project from Spring Cloud to K8s in 7 days

2023-09-28

Previously, the project was built on Spring Cloud. The main components were Spring Cloud Gateway, Nacos, MinIO, the load balancer, OpenFeign, and so on, and we deployed the microservices to virtual machines with Docker.

However, for security reasons it needed to be migrated to Azure AKS (Kubernetes), which means refactoring the Spring Cloud services into plain Spring Boot. That way we no longer have to maintain security policies for the virtual machines ourselves, nor keep up with patching them.

Reviewing the project structure

The project is organized as microservices: roughly 5 business services and 4 shared services. Most of the refactoring is concentrated in the gateway and auth services. The shared packages need fewer changes, mainly switching the OpenFeign clients to call by URL instead of by service name as before.

In Kubernetes, we use Traefik2 to take over the gateway's role. If you are not familiar with Traefik2, you can check out my previous articles.

At the same time, an authorization endpoint needs to be exposed and wired into Traefik2 so that every request is authenticated.
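
For context, Traefik2's ForwardAuth middleware is one way to wire such an auth endpoint in; the sketch below is only hypothetical (the service name, port, path, and header are placeholders), not the project's actual configuration.

# Hypothetical Traefik2 ForwardAuth middleware: Traefik calls the auth endpoint for every
# request and only forwards the request when the endpoint returns a 2xx response.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: forward-auth
spec:
  forwardAuth:
    address: http://auth-service:8080/auth/verify
    authResponseHeaders:
      - X-User-Id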

Starting the refactor

Creating a branch

First of all, we pull a new branch for these changes, so even if things don't work out it won't affect anyone else. Let's call the branch feature/AKS-migrate.

Refactoring the gateway

First, comment out the unnecessary dependencies in the pom file, such as Spring Cloud Gateway, Nacos, Sentinel, and the other Spring Cloud components. After commenting them out, check what errors appear in the code and fix them accordingly.

Our project uses quite a few gateway filters and handlers. At first I thought that since they are all based on WebFlux, maybe I could just pull in that package separately and keep the code changes minimal, right?

After trying it, I found it didn't work, because the project mostly uses @RestController (Spring MVC), and with the WebFlux approach many of the filters simply don't take effect.

So that approach didn't work either, and the code changes had grown too large, so I rolled back to the step of commenting out the pom dependencies.

There was no way around it: I had to read through the original code logic and then port it.

Read the gateway filter code and convert it into a Spring filter: just extend org.springframework.web.filter.OncePerRequestFilter and move the previous logic over.

Note that global filters need to be placed in the common packages.

Handlers can be converted into filters the same way; just pay attention to the execution order.
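
As a rough illustration, a converted filter might look like the sketch below. It assumes Spring Boot 2.x (javax.servlet imports; use jakarta.servlet on Boot 3), and the class name and header check are made up rather than taken from the project.

// Minimal sketch of a gateway filter ported to a Spring MVC filter.
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

@Component
@Order(1) // lower values run earlier; this replaces the gateway filter's getOrder()
public class AuthHeaderFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Port the old gateway filter logic here, e.g. rejecting requests without a token.
        if (request.getHeader("Authorization") == null) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        chain.doFilter(request, response);
    }
}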

With that, the core code refactoring was done and debugging could begin.

Pitfalls encountered

Besides the WebFlux issue mentioned above, after switching from Spring Cloud to Spring Boot our configuration files, previously named bootstrap.yml and bootstrap-dev.yml, had to be renamed to application.yml and application-{env}.yml.

Otherwise the application won't start, complaining that it can't find the configuration. This was a pit I dug for myself.

Also, when converting a gateway filter into a Spring filter, make sure the previous logic is ported in full. I hit a problem where the request stream could be read repeatedly before the conversion, but afterwards the same code threw an error, because I had not ported that piece of logic.
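
For reference, the sketch below shows one common way to make the request body re-readable in the Spring MVC filter chain by caching it once; this is not the project's original code, and it assumes Spring Boot 2.x (javax.servlet). Spring's own ContentCachingRequestWrapper is an alternative, but it only exposes the cached bytes after the body has been consumed once.

// Minimal sketch: cache the request body so downstream filters and controllers can read it repeatedly.
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.util.StreamUtils;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.servlet.FilterChain;
import javax.servlet.ReadListener;
import javax.servlet.ServletException;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpServletResponse;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

@Component
@Order(0) // run before the other filters so they all see the cached request
public class BodyCachingFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Real code may want to skip multipart uploads or very large bodies.
        chain.doFilter(new CachedBodyRequest(request), response);
    }

    // Wrapper that buffers the body so getInputStream()/getReader() can be called repeatedly.
    static class CachedBodyRequest extends HttpServletRequestWrapper {
        private final byte[] body;

        CachedBodyRequest(HttpServletRequest request) throws IOException {
            super(request);
            this.body = StreamUtils.copyToByteArray(request.getInputStream());
        }

        @Override
        public ServletInputStream getInputStream() {
            ByteArrayInputStream buffer = new ByteArrayInputStream(body);
            return new ServletInputStream() {
                @Override public boolean isFinished() { return buffer.available() == 0; }
                @Override public boolean isReady() { return true; }
                @Override public void setReadListener(ReadListener listener) { /* synchronous reads only */ }
                @Override public int read() { return buffer.read(); }
            };
        }

        @Override
        public BufferedReader getReader() {
            return new BufferedReader(new InputStreamReader(getInputStream(), StandardCharsets.UTF_8));
        }
    }
}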

Replacing Nacos

As mentioned earlier, Nacos was mainly used for OpenFeign calls (service discovery) and for variable injection. Feign is easy to change: just specify the url attribute on the client instead of relying on the registered service name.
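
A hypothetical client might look like the following, assuming the spring-cloud-openfeign starter stays on the classpath; the interface, path, and property name are illustrative, and in AKS the URL would typically point at the in-cluster Service address.

// Sketch of an OpenFeign client that calls by URL instead of by Nacos service name.
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// env_order_service_url is a made-up property, injected per environment like the Redis settings below.
@FeignClient(name = "order-service", url = "${env_order_service_url}")
public interface OrderClient {

    @GetMapping("/orders/{id}")
    String getOrder(@PathVariable("id") Long id);
}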

Variable injection, in turn, can be replaced by Kubernetes ConfigMaps and Secrets. So we need to move the variables previously configured in Nacos into the configuration file, so that their values can be injected directly through Kubernetes.

We only need one set of code (one image) for all environments. At deployment time we simply inject a different configuration, and the same image can be deployed to every environment, which keeps the code consistent across environments.

For example, the previous configuration was like this:

spring:
  redis:
    host: 127.0.0.1
    port: 6379
    database: 0
    password: 123456

After the refactor, the configuration file becomes:

spring:
  redis: 
    host: ${env_redis_host}
    port: ${env_redis_port}
    database: ${env_redis_database}
    password: ${env_redis_password}

The variables here are provided through a ConfigMap and injected into the container as environment variables, so Spring can resolve the placeholders from the environment.

Deployment

We previously deployed with Jenkins, which we also hosted ourselves. Now everything has moved to Azure (GitHub), so its pipeline is used directly for deployment, and we use Helm to manage the Kubernetes resources.

For example, the chart structure Helm generated in my project looks like this:

C:.                 
│  .helmignore      
│  Chart.yaml       
│  values-prod.yaml 
│  values-qa.yaml   
│  values-test.yaml 
│  values.yaml      
│                   
├─charts            
├─config            
│  ├─dev            
│  │      config.yaml
│  │      secret.yaml
│  │                
│  ├─prod           
│  │      config.yaml
│  │      secret.yaml
│  │                
│  ├─qa             
│  │      config.yaml
│  │      secret.yaml
│  │                
│  └─test           
│          config.yaml
│          secret.yaml
│                   
└─templates         
        configmap.yaml
        deployment.yaml
        hpa.yaml    
        secret.yaml 
        service.yaml
        _helpers.tpl

When deploying, you just specify a different values file to deploy the same image into a different environment.
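
For instance, a QA rollout might look like the hypothetical command below; the release name and chart path are placeholders, values.yaml is picked up by default, and values-qa.yaml overrides it.

# Deploy the same chart and image to QA by layering the QA values on top of the defaults
helm upgrade --install think-manifesto . -f values-qa.yaml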

The contents of config.yaml and secret.yaml under the dev directory are roughly as follows:

# config.yaml  
env_redis_host: localhost
env_redis_port: 6379
env_redis_database: 1


#secret.yaml

env_redis_password: 123456

In the templates, configmap.yaml and secret.yaml mainly handle converting those file contents into the corresponding Kubernetes YAML:

# values.yaml: specify which config files to load
configOverrides:
  - config/dev/config.yaml 
secretOverrides:
  - config/dev/secret.yaml


# configmap.yaml
apiVersion: v1  
kind: ConfigMap
metadata:
  name: {{ include "think-manifesto.fullname" . }}-configmap
  namespace: {{ .Values.nameSpace }}  
data:
{{- $files := .Files }}
{{- range  .Values.configOverrides }} 
{{- range $key, $value :=  ($files.Get (printf "%s" .) | fromYaml) }}
{{ $key | indent 2 }}: {{ $value | quote }}  
{{- end }}
{{- end }}

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "think-manifesto.fullname" . }}-secret
  namespace: {{ .Values.nameSpace }}
type: Opaque
data: 
{{- $files := .Files }}
{{- range  .Values.secretOverrides }}
{{- range $key, $value :=  ($files.Get (printf "%s" .) | fromYaml) }} 
{{ $key | indent 2 }}: {{ $value | toString | b64enc }}
{{- end }}  
{{- end }}

Finally, here is an example of my deployment.yaml for reference:

apiVersion: apps/v1
kind: Deployment  
metadata:
  name: {{ include "think-manifesto.fullname" . }}
  namespace: {{ .Values.nameSpace }}
  labels:
    {{- include "think-manifesto.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "think-manifesto.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "think-manifesto.selectorLabels" . | nindent 8 }}
      annotations:
        {{- if .Values.configOverrides}}
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} 
        {{- end }}
        {{- if .Values.secretOverrides}}
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
        {{- end }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: {{ .Values.service.portName }}
              containerPort: {{ .Values.service.port }}
          envFrom:
            {{- if .Values.configOverrides }}
            - configMapRef:
                name: {{ include "think-manifesto.fullname" . }}-configmap
            {{- end }}
            {{- if .Values.secretOverrides }}
            - secretRef:
                name: {{ include "think-manifesto.fullname" . }}-secret
            {{- end }}
          {{- with .Values.livenessProbe }}
          livenessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.readinessProbe }}
          readinessProbe:
            {{- toYaml . | nindent 12 }} 
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}  
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

I won't include the values.yaml here; basically, whatever the other templates reference needs to be defined there.

Compared with Kustomize, installing third-party charts with Helm is very convenient, since it has its own repositories. Here are the commands I used to install Traefik2:

# Add traefik repository
helm repo add traefik   https://traefik.github.io/charts     
# Add China mirror repositories
helm repo add stable http://mirror.azure.cn/kubernetes/charts      
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update
helm repo list

helm install --set deployment.kind=DaemonSet --set namespaceOverride=traefik --set service.enabled=false traefik traefik/traefik
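
To tie this back to the gateway replacement, routing is then expressed as Traefik IngressRoutes. The sketch below is hypothetical (the entry point, path, service name, and port are placeholders) and attaches the ForwardAuth middleware sketched earlier.

# Hypothetical IngressRoute: route a path prefix to a business service and authenticate
# every request through the forward-auth middleware defined above.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: business-service-route
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/api/business`)
      kind: Rule
      middlewares:
        - name: forward-auth
      services:
        - name: business-service
          port: 8080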

Local verification

Once you have written the chart above, you may not have a local Kubernetes environment (it may only exist on the server) but still want to verify locally that the chart is correct. You can use the following command.

# Replace the variables below with your own: chart-name is the chart name, chart-dir is the chart directory
helm template --dry-run --debug --disable-openapi-validation ${chart-name} .\${chart-dir}\ 

Then, if you want to install the chart into a Kubernetes environment that lives on a remote server, you can package the chart and install it from there, or upload the chart directory to the server and install it directly (Helm needs to be installed on the server first).
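
A hypothetical sequence might look like this; the .tgz file name comes from the name and version in Chart.yaml (0.1.0 is used as a stand-in), and the server paths are placeholders.

# Package the chart locally, copy the archive to the server, then install it there
helm package .\${chart-dir}\
scp ${chart-name}-0.1.0.tgz user@server:/tmp/
helm install ${chart-name} /tmp/${chart-name}-0.1.0.tgz -f values-prod.yaml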

Wrapping up

With that, the migration from Spring Cloud to the Kubernetes cluster is finally complete. Since it was my first time using Helm (I had used Kustomize before), I spent some time on it, mainly on troubleshooting, but I have to say the Helm documentation is well written and very clear.

Beyond that, there was the code refactoring and a handful of configuration issues. Because the project moved to Azure, I'm not very familiar with its pipeline configuration yet, but fortunately I could practice with it directly, which helped solve some of the problems.

This article is licensed under the Creative Commons Attribution 4.0 International License.
Tags: Kubernetes, Spring Boot, Spring Cloud
Last updated: 2023-09-28
