
bike-festival-2024-backend (github.com)

DB Schema

User Table

| Field | Type | GORM Options | JSON Key | Description |
| --- | --- | --- | --- | --- |
| ID | string | type:varchar(36);primary_key | id | The unique identifier for the user (from the LINE account). |
| Name | string | type:varchar(255);index | name | The name of the user (from the LINE account). |

User-Event Table

| Field | Type | Description |
| --- | --- | --- |
| user_id | varchar(36) | The ID of the user, linking to User.ID. |
| event_id | varchar(36) | The ID of the event, linking to Event.ID. |
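In GORM, this join table could be declared as a model of its own; the following sketch is an assumption based on the columns above (field names and tags are mine, not taken from the repo):

```go
package main

import "fmt"

// UserEvent is a hypothetical GORM model for the user-event join table.
// Both columns form the composite primary key.
type UserEvent struct {
	UserID  string `gorm:"type:varchar(36);primaryKey"`
	EventID string `gorm:"type:varchar(36);primaryKey"`
}

func main() {
	ue := UserEvent{UserID: "u-123", EventID: "e-456"}
	fmt.Println(ue.UserID, ue.EventID)
}
```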

Event Table

| Field | Type | GORM Options | JSON Key | Redis Key | Description |
| --- | --- | --- | --- | --- | --- |
| ID* | string | type:varchar(36);primary_key | id | id | The event ID defined by the frontend. If not provided, it is computed from a hash of the event detail and event time. |
| EventTimeStart* | time.Time | type:timestamp | event_time_start | event_time_start | The start time of the event. |
| EventTimeEnd* | time.Time | type:timestamp | event_time_end | event_time_end | The end time of the event. |
| EventDetail* | string | type:varchar(1024) | event_detail | event_detail | The details of the event, stored in JSON format. Parsed when sending to the LINE Messaging API. |

Psychological Test Statistics

  • Store each result type
  • Compute percentage statistics

| Field | Type | GORM Options | Description |
| --- | --- | --- | --- |
| Type | string | type:varchar(255);unique | The unique type of the psycho test. |
| Count | int | type:int | The count associated with the test. |

API

  • Add type
  • Retrieve statistic result
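The statistics endpoint presumably turns the per-type counts into percentages; a minimal sketch (type and function names are my assumptions, not from the repo):

```go
package main

import "fmt"

// TypeCount mirrors a row of the statistics table above.
type TypeCount struct {
	Type  string
	Count int
}

// percentages converts raw counts into a percentage per result type.
func percentages(rows []TypeCount) map[string]float64 {
	total := 0
	for _, r := range rows {
		total += r.Count
	}
	out := make(map[string]float64, len(rows))
	if total == 0 {
		return out // avoid division by zero when no tests recorded yet
	}
	for _, r := range rows {
		out[r.Type] = float64(r.Count) * 100 / float64(total)
	}
	return out
}

func main() {
	fmt.Println(percentages([]TypeCount{{"A", 3}, {"B", 1}}))
}
```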

Line

Official Document

Tutorial

Line Login Integration Tutorial

Push Line Flex Message

Asynq

Add Scheduled Task

Cancel Scheduled Task

Optimization

Get Event By EventID

DB only

(2000 virtual users, for 1 min)

(benchmark result screenshot)

Redis Cache + DB

(2000 virtual users, for 1 min)

(benchmark result screenshot)
type EventCache struct {
	ID             string    `json:"id" redis:"id"`
	EventTimeStart time.Time `json:"event_time_start" redis:"event_time_start"`
	EventTimeEnd   time.Time `json:"event_time_end" redis:"event_time_end"`
	EventDetail    string    `json:"event_detail" redis:"event_detail"`
	CreatedAt      time.Time `json:"created_at" redis:"created_at"`
	UpdatedAt      time.Time `json:"updated_at" redis:"updated_at"`
}
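The "Redis Cache + DB" variant follows the usual cache-aside pattern: check the cache first, fall back to the database, and populate the cache on a miss. A minimal sketch, with plain maps standing in for Redis and the DB (the real code would use go-redis and GORM):

```go
package main

import (
	"errors"
	"fmt"
)

// Event is a trimmed-down stand-in for the EventCache struct above.
type Event struct {
	ID          string
	EventDetail string
}

var (
	cache = map[string]Event{} // stands in for Redis
	db    = map[string]Event{ // stands in for the SQL database
		"e-1": {ID: "e-1", EventDetail: `{"title":"ride"}`},
	}
)

// getEventByID reads through the cache: hit -> return immediately,
// miss -> load from the DB and fill the cache for later reads.
func getEventByID(id string) (Event, error) {
	if ev, ok := cache[id]; ok {
		return ev, nil // cache hit: the DB is never touched
	}
	ev, ok := db[id]
	if !ok {
		return Event{}, errors.New("event not found")
	}
	cache[id] = ev // populate the cache on a miss
	return ev, nil
}

func main() {
	ev, _ := getEventByID("e-1")
	fmt.Println(ev.ID, len(cache))
}
```

This is why the cached run above sustains far more requests: after the first miss, reads are served from memory.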

Deployment

Nginx Setup

Nginx Reverse Proxy

[!info]

Change the domain in the ssl_certificate & ssl_certificate_key lines to your own (for Certbot).

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name nckubikefestival.ncku.edu.tw;

    ssl_certificate /etc/letsencrypt/live/nckubikefestival.ncku.edu.tw/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/nckubikefestival.ncku.edu.tw/privkey.pem; # managed by Certbot
    ssl_ecdh_curve X25519:secp384r1;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1440m;
    ssl_session_tickets off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers TLS13-AES-256-GCM-SHA384:TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:TLS13-AES-128-CCM-8-SHA256:TLS13-AES-128-CCM-SHA256:EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+ECDSA+AES128:EECDH+aRSA+AES128:RSA+AES128:EECDH+ECDSA+AES256:EECDH+aRSA+AES256:RSA+AES256:EECDH+ECDSA+3DES:EECDH+aRSA+3DES:RSA+3DES:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/nckubikefestival.ncku.edu.tw/chain.pem;
    add_header Strict-Transport-Security "max-age=31536000; preload";


    # Forward https://nckubikefestival.ncku.edu.tw/api/<path> to http://localhost:8000/<path>
    # For Golang Backend

    location /api/ {

        proxy_pass http://localhost:8000/;

        proxy_set_header Host $host;

        proxy_set_header X-Real-IP $remote_addr;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_set_header X-Forwarded-Proto $scheme;

    }

    # Forward https://nckubikefestival.ncku.edu.tw/* to http://localhost:5173/*
    # For Vue Frontend

    location / {

        proxy_pass http://localhost:5173/;

        proxy_set_header Host $host;

        proxy_set_header X-Real-IP $remote_addr;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_set_header X-Forwarded-Proto $scheme;

    }
}

CertBot

sudo apt install certbot
sudo apt-get install python3-certbot-nginx

# Request a certificate
sudo certbot --nginx --email peterxcli@gmail.com --agree-tos -d nckubikefestival.ncku.edu.tw

# Install the certificate (cert-name must match the server_name in the nginx config)
sudo certbot install --cert-name nckubikefestival.ncku.edu.tw

Bug

Line login redirect

[!warning] The bug is caused by the Referrer-Policy: the default policy is strict-origin-when-cross-origin.

In my case, I use an additional redirect_path (set in the query string) to compose the frontend redirect path.

It works fine when I develop on my local machine, but in production it always redirects the user to a page with a duplicated path, like: /bikefest/main-stagebikefest/main-stage/

I then discovered that in my local development environment the request Referer only contains the domain name (localhost:5173), whereas production sends the full path and query string to the backend server.

The reason: in the dev environment, the frontend is at localhost:5173 and the backend at localhost:8000, so the default strict-origin-when-cross-origin policy sends only the origin as the Referer value. In the production environment, the frontend and backend share the same domain and differ only in path, so the default policy sends origin, path, and query as the Referer; meanwhile the frontend also sends its window.location.path as the redirect_path query string. The backend then composes the Referer with redirect_path, and the result looks like `https://<domain>/<window.location.path>/<window.location.path>`. That is the main reason production shows pages with a duplicated path.

To resolve this, we only need to set the Referrer-Policy in the nginx configuration so that the Referer includes only the origin:

server {
    ...

    # Set the Referrer-Policy header
    add_header Referrer-Policy "origin";

    ...
}

Reference


Horizontal Pod Autoscaler

deployment placement

The container's resources property must be set; otherwise the autoscaler can't get your container's runtime metrics.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: <deployment name>
  namespace: <namespace>
  labels:
    app: <deployment name>
    k8s-app: <deployment name>
  annotations:
    deployment.kubernetes.io/revision: '21'
spec:
  selector:
    matchLabels:
      k8s-app: <deployment name>
  template:
    metadata:
      name: <deployment name>
      creationTimestamp: null
      labels:
        app: <deployment name>
        k8s-app: <deployment name>
    spec:
      containers:
        - name: <deployment name>
          image: <container-registry/image>
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
            - name: grpc
              containerPort: 9001
              protocol: TCP
          resources:
            limits:
              cpu: 200m
              # memory: 1024Mi
            requests:
              cpu: 50m
              # memory: 128Mi
          imagePullPolicy: Always
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      imagePullSecrets:
        - name: ghcr
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

hpa placement

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: <hpa-name>
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <your desired deployment name>
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50 # unit: percent
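The HPA controller sizes the deployment with the formula from the Kubernetes docs, desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), which is easy to sanity-check:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas implements the HPA scaling formula:
// ceil(currentReplicas * currentMetricValue / desiredMetricValue).
func desiredReplicas(current int, currentUtil, targetUtil float64) int {
	return int(math.Ceil(float64(current) * currentUtil / targetUtil))
}

func main() {
	// 3 pods running at 80% CPU against the 50% target above -> scale to 5
	fmt.Println(desiredReplicas(3, 80, 50))
}
```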

gitlab runner auto rollout to kubernetes cluster

gitlab -> runner -> automatically trigger a kubectl rolling update (using /home/ubuntu/.kube)

runner on kubernetes

If we want multiple repos to run on the same GitLab runner, we need to register a group runner (which requires creating a group).

create agent config file in project (optional)

To create an agent configuration file:

Choose a name for your agent. The agent name follows the DNS label standard from RFC 1123. The name must:

  • Be unique in the project.
  • Contain at most 63 characters.
  • Contain only lowercase alphanumeric characters or -.
  • Start with an alphanumeric character.
  • End with an alphanumeric character.

Then, in the repository's default branch, create an agent configuration file at the root:
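Those naming rules can be checked with a short RFC 1123 DNS-label regex; the helper name here is mine, not GitLab's:

```go
package main

import (
	"fmt"
	"regexp"
)

// dnsLabel encodes the rules above: lowercase alphanumerics or '-',
// 1-63 characters, starting and ending with an alphanumeric character.
var dnsLabel = regexp.MustCompile(`^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$`)

func validAgentName(name string) bool {
	return dnsLabel.MatchString(name)
}

func main() {
	fmt.Println(validAgentName("my-agent")) // valid
	fmt.Println(validAgentName("-bad"))     // starts with '-'
	fmt.Println(validAgentName("Bad"))      // uppercase not allowed
}
```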

.gitlab/agents/<agent-name>/config.yaml

You can leave the file blank for now, and configure it later.

Register the agent with GitLab

1. Select Operate > Kubernetes clusters


2. Select Connect a cluster (agent)

* If you want to create a configuration with CI/CD defaults, type a name.
* If you already have an agent configuration file, select it from the list.

3. If you already have a config file in your project, select it; otherwise, create a new one.

4. Get the access token.

Update your .gitlab-ci.yml file to run kubectl commands

1. Install the Kubernetes agent in your cluster through Helm with the provided token:

helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install <xxxxx> gitlab/gitlab-agent \
--namespace gitlab-agent-<xxxxx> \
--create-namespace \
--set image.tag=v16.5.0 \
--set config.token=glagent-YoxxFv-5HxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxUeA \
--set config.kasAddress=wss://kas.gitlab.com

2. Check whether GitLab has connected to the cluster agent.

3. Run kubectl in .gitlab-ci.yml:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/repository:agent-name
    - kubectl get pods

Note: if you are not sure what your agent's context is, open a terminal, connect to your cluster, and run kubectl config get-contexts. (In my case, I had to execute the command directly in the GitLab CI-triggered job.)

  4. Push the change to trigger the GitLab workflow and run your own custom kubectl commands in the GitLab CI.

reference

RBAC

github repo

Model

model.go

var (
	Viewer = "GET"
	Editor = "^(POST|PUT|PATCH)$"
	Admin  = ".*" // note: a bare "*" is not a valid regexp, so matches() would silently never match
)

type User struct {
	ID    uint   `gorm:"primaryKey;autoIncrement"`
	Name  string `gorm:"type:varchar(255);not null;uniqueIndex"`
	Roles []Role `gorm:"many2many:user_roles;constraint:OnUpdate:CASCADE,OnDelete:SET NULL;"`
}

type Role struct {
	ID          uint         `gorm:"primaryKey;autoIncrement"`
	Name        string       `gorm:"type:varchar(255);not null;uniqueIndex"`
	Permissions []Permission `gorm:"many2many:role_permissions;constraint:OnUpdate:CASCADE,OnDelete:SET NULL;"`
}

type Permission struct {
	ID uint `gorm:"primaryKey;autoIncrement"`

	// GET, POST
	Action string `gorm:"type:varchar(255);not null;index:idx_action_route;check:action_check"`

	// e.g. /api/v1/user, /api/v1/user/*
	Route string `gorm:"type:varchar(255);not null;index:idx_action_route;check:route_check"`

	// Allow or Deny
	Allowed bool `gorm:"not null"`
}

func (Permission) TableName() string {
	return "permissions"
}

func (Permission) CheckConstraints(db *gorm.DB) {
	db.Exec("ALTER TABLE permissions ADD CONSTRAINT action_check CHECK (Action IN ('GET', 'POST', 'PUT', 'PATCH', 'DELETE'))")
	db.Exec("ALTER TABLE permissions ADD CONSTRAINT route_check CHECK (Route LIKE '/%')")
}

After a user logs in, they receive a token. This token should be included in the HTTP Authorization header of every subsequent request for verification.

Here is how Gin checks permissions after receiving a request:

Authenticate Middleware

1. Verify the token:

In middleware.AuthMiddleware, the token is first extracted from the request's Authorization header, then parsed with the parseToken function to obtain the user's ID. If the token is missing, invalid, or expired, a 401 Unauthorized error is returned. Otherwise, the userId is set in Gin's context for use by later middleware or handler functions.
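The header-extraction step can be sketched in isolation; the helper name is hypothetical (the real middleware goes on to parse the token as a JWT to recover the user ID):

```go
package main

import (
	"fmt"
	"strings"
)

// extractToken pulls the bearer token out of an Authorization header value.
// It returns false when the header is missing the "Bearer " scheme or empty.
func extractToken(authHeader string) (string, bool) {
	const prefix = "Bearer "
	if !strings.HasPrefix(authHeader, prefix) {
		return "", false
	}
	token := strings.TrimSpace(strings.TrimPrefix(authHeader, prefix))
	return token, token != ""
}

func main() {
	tok, ok := extractToken("Bearer eyJhbGciOi...")
	fmt.Println(tok, ok)
}
```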

2. Check permissions:

The middleware.CheckPermission middleware uses the userId previously set in the context to load the user from the database together with their roles and permissions. Each role has a corresponding set of permissions, and a permission determines which HTTP methods and routes the user may access.

Two for loops walk through all of the user's roles and their permissions, checking whether any of them matches the current request's HTTP method and route.

If a matching permission is found and it is allowed, the request continues to the handler; otherwise, a 403 Permission Denied is returned.

func CheckPermission(db *gorm.DB) gin.HandlerFunc {
	return func(c *gin.Context) {
		matchedAny := false
		userId := c.MustGet("userId").(uint) // the userId set by the token-authentication middleware
		var user models.User
		db.Preload("Roles").Preload("Roles.Permissions").Where("ID = ?", userId).First(&user)

		for _, role := range user.Roles {
			for _, permission := range role.Permissions {
				if matches(c.Request.Method, permission.Action) && matches(c.Request.URL.Path, permission.Route) {
					matchedAny = true
					if !permission.Allowed {
						c.AbortWithStatusJSON(403, gin.H{"error": "Permission denied"})
						return
					}
				}
			}
		}
		if !matchedAny {
			c.AbortWithStatusJSON(403, gin.H{"error": "Permission denied"})
			return
		}
		c.Next()
	}
}

// case-insensitive regex matching
func matches(requestValue, patternValue string) (matched bool) {
	// case insensitive: https://stackoverflow.com/a/9655186
	matched, _ = regexp.MatchString("(?i)"+patternValue, requestValue)
	return
}
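A worked example of how the role patterns behave against HTTP methods (a self-contained copy of matches is included so the snippet runs on its own):

```go
package main

import (
	"fmt"
	"regexp"
)

// matches does case-insensitive regex matching, as in the middleware above.
func matches(requestValue, patternValue string) bool {
	matched, _ := regexp.MatchString("(?i)"+patternValue, requestValue)
	return matched
}

func main() {
	fmt.Println(matches("GET", "GET"))                   // Viewer pattern: true
	fmt.Println(matches("post", "^(POST|PUT|PATCH)$"))   // Editor pattern, case-insensitive: true
	fmt.Println(matches("DELETE", "^(POST|PUT|PATCH)$")) // not a mutating verb the Editor allows: false
}
```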

TypeError: Cannot read properties of null (reading 'insertBefore')

I ran into this error while writing Vue.

The original code looked like this:

<script setup lang="ts">
const getStrongKey = computed(() => {
  return currentSong.value?.Key || 0 + Date.now() + Math.random()
})
</script>

<template>
  <div class="transition-all duration-300" :class="[_bg_class]" @click="changePageToSinging">
    <video
      v-if="currentSong?.Category !== songCategory.Youtube"
      ref="video"
      crossorigin="anonymous"
      class="transition-all duration-300"
      :loop="loop"
      :class="[_video_class]"
      :style="{ height: `${videoHeight}` }"
    />
    <YoutubeVideo
      v-else
      :key="`${getStrongKey}`"
      :video-id="`${youtubeLinkId}`"
      :class-name="`transition-all duration-300 w-full ${[_video_class]}`"
      :style="{ height: `${videoHeight}` }"
    />
  </div>
</template>

I added a key to the YoutubeVideo component, but the error still occurred. Here is what YoutubeVideo looks like:

<!-- eslint-disable no-console -->
<script setup lang="ts">
import PlayerFactory from 'youtube-player'
import type { YouTubePlayer } from 'youtube-player/dist/types'

const props = defineProps<{
  videoId: string
  className: string
  style: Object
}>()

const songStore = useSongStore()
const functionStore = useFunctionStore()
const pageStore = usePageStore()
const { isPlay, isPlayReady } = storeToRefs(functionStore)
const { midiSpeedMultiple, midiSpeed } = storeToRefs(songStore)

let player: YouTubePlayer

onMounted(() => {
  console.log(props)
  player = PlayerFactory('ytplayer', {
    videoId: props.videoId || '',
    height: '100%',
    width: '100%',
    playerVars: {
      autoplay: 1,
      controls: 0,
      modestbranding: 1,
      rel: 0,
    },
  })
  songStore.setCurrentTime(0)
})

onUnmounted(() => {
  player.destroy()
})

const getStyle = computed(() => {
  if (currentPageIndex.value === pageNames.Singing)
    return { ...props.style, width: `${width.value}px` }
  return { ...props.style }
})
</script>

<template>
  <div :class="className" :style="{ ...getStyle }">
    <div id="ytplayer" style="height: 100%; width: 100%;" />
  </div>
</template>

It seems that once the v-else condition for <YoutubeVideo /> no longer holds, changing only the component's key swaps out the parent element while its children remain in place.

My own guess is that a child element ends up pointing at a null parent, which causes the error. In the end I simply wrapped <YoutubeVideo /> in an extra <div /> and moved the key to that outer <div />; reportedly this refreshes the entire DOM subtree.

After the change it looks roughly like this:

<template>
  <div class="transition-all duration-300" :class="[_bg_class]" @click="changePageToSinging">
    <video
      v-if="currentSong?.Category !== songCategory.Youtube"
      ref="video"
      crossorigin="anonymous"
      class="transition-all duration-300"
      :loop="loop"
      :class="[_video_class]"
      :style="{ height: `${videoHeight}` }"
    />
    <div v-else :key="`${getStrongKey}`">
      <YoutubeVideo
        v-if="currentSong?.Category === songCategory.Youtube"
        :video-id="`${youtubeLinkId}`"
        :class-name="`transition-all duration-300 w-full ${[_video_class]}`"
        :style="{ height: `${videoHeight}` }"
      />
    </div>
  </div>
</template>

Anyway, it works now, though it feels a bit like guesswork... I'd love an explanation from the Vue maintainers.

problem description

Next.js receives a token from another site (usually an OAuth server) and needs to send it to the Next.js server (the server that renders the page and sends it to your browser) so the token can be used in getServerSideProps; we can then protect routes or fetch auth-required data before rendering.

solution

You can use an HTTP-only cookie to store the token and pass it between the client and server. This approach works with serverless deployments and multiple instances.

Here's a solution using TypeScript and the next-iron-session package to store the token in an encrypted cookie:

  1. Install the required dependencies:
npm install next-iron-session cookie
  2. Configure next-iron-session: create a file iron-session-config.ts in your project (my_project/src/iron-session-config.ts). Note: the sessionOptions password field must be at least 32 characters long.
import { SessionOptions } from 'next-iron-session';

const sessionOptions: SessionOptions = {
  cookieName: 'nextjs_token_cookie',
  password: process.env.SECRET_COOKIE_PASSWORD || 'default_password_change_me_minimum_32_characters',
  cookieOptions: {
    secure: process.env.NODE_ENV === 'production',
  },
};

export default sessionOptions;
  3. Update your API route to store the token in the session, in the pages/api/token.ts file:
import { NextApiRequest, NextApiResponse } from 'next';
import { withIronSession } from 'next-iron-session';
import sessionOptions from '../../iron-session-config';

const handler = async (req: NextApiRequest, res: NextApiResponse) => {
  if (req.method === 'POST') {
    const token = req.body.token;

    // Store the token in the session
    req.session.set('token', token);
    await req.session.save();

    res.status(200).json({ message: 'Token stored successfully' });
  } else {
    res.status(405).json({ message: 'Method not allowed' });
  }
};

export default withIronSession(handler, sessionOptions);
  4. Update getServerSideProps to retrieve the token from the session:
import { GetServerSideProps } from 'next';
import { withIronSession } from 'next-iron-session';
import sessionOptions from '../iron-session-config';

export const getServerSideProps: GetServerSideProps = withIronSession(
  async ({ req, res }) => {
    // Retrieve the token from the session
    const token = req.session.get('token');

    // Use the token to fetch data or perform other actions on the server-side

    return {
      props: {
        // Pass any data you fetched to your component as props
      },
    };
  },
  sessionOptions,
);

  5. Then, after you log in, send your auth token to the web server to save it:
async function sendTokenToServer(token: string): Promise<void> {
  const response = await fetch('/api/token', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ token }),
    credentials: 'include', // Include session cookie
  });

  if (response.ok) {
    console.log('Token stored successfully');
  } else {
    console.error('Failed to store token');
  }
}

export default function Login() {
  const handleSubmit = async (e: React.SyntheticEvent<HTMLFormElement>) => {
    e.preventDefault();
    // email and password come from the surrounding form state
    const response = await login(email, password);
    const token = getTokenFromResponse(response);

    // ... the rest of the code remains the same

    await sendTokenToServer(token);

    // ... the rest of the code remains the same
  };
}

pull private container registry from gitlab

  1. Go to your repo > Settings > Repository.

  2. Expand Deploy tokens and set the token name, username, and scope.

  3. Click Create deploy token.

  4. Then run:

    kubectl create secret docker-registry <secret_name> --docker-server=registry.gitlab.com --docker-username=<username> --docker-password=<gitlab_token> -n <namespace>

  5. In the deployment or image spec:

    • add imagePullSecrets

    spec:
      containers:
        - name: <arbitrary name>
          image: registry.gitlab.com/<username>/<container-registry-path>:<tag>
          imagePullPolicy: Always
          securityContext:
            privileged: false
      imagePullSecrets:
        - name: <secret_name>
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      nodeSelector: # this can be used to pin the pod to a specific node
        kubernetes.io/hostname: <node-name>
      schedulerName: default-scheduler

pull private image from github registry (ghcr.io)

  1. Create a personal access token.

  2. then run:

    kubectl create secret docker-registry <secret_name> --docker-server=ghcr.io --docker-username=<github-username> --docker-password=<personal-access-token> -n <namespace>

  3. In the deployment or image spec:

    • add imagePullSecrets

    spec:
      containers:
        - name: <arbitrary name>
          image: ghcr.io/<username>/<container-registry-path>:<tag>
          imagePullPolicy: Always
          securityContext:
            privileged: false
      imagePullSecrets:
        - name: <secret_name>
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      nodeSelector: # this can be used to pin the pod to a specific node
        kubernetes.io/hostname: <node-name>
      schedulerName: default-scheduler