knox / Distroless-1.md
| Image | Tags | Architecture Suffixes |
|---|---|---|
| gcr.io/distroless/static-debian12 | latest, nonroot, debug, debug-nonroot | amd64, arm64, arm, s390x, ppc64le |
| gcr.io/distroless/base-debian12 | latest, nonroot, debug, debug-nonroot | amd64, arm64, arm, s390x, ppc64le |
| gcr.io/distroless/base-nossl-debian12 | latest, nonroot, debug, debug-nonroot | amd64, arm64, arm, s390x, ppc64le |
| gcr.io/distroless/cc-debian12 | latest, nonroot, debug, debug-nonroot | amd64, arm64, arm, s390x, ppc64le |
| gcr.io/distroless/python3-debian12 | latest, nonroot, debug, debug-nonroot | amd64, arm64 |
| gcr.io/distroless/java-base-debian12 | latest, nonroot, debug, debug-nonroot | amd64, arm64, s390x, ppc64le |
| gcr.io/distroless/java17-debian12 | latest, nonroot, debug, debug-nonroot | amd64, arm64, s390x, ppc64le |
| gcr.io/distroless/java21-debian12 | latest, nonroot, debug, debug-nonroot | amd64, arm64, ppc64le |
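These images are meant to be the final runtime stage of a multi-stage build. A minimal sketch using the static-debian12 image with its nonroot tag from the table (the Go build stage, source layout, and binary name are assumptions for illustration; any statically linked binary works the same way):

```dockerfile
# build stage: hypothetical Go app, compiled as a static binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# runtime stage: distroless static image, nonroot tag from the table above
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The nonroot tag runs the entrypoint as an unprivileged user; the debug variants add a busybox shell for troubleshooting with `docker exec`.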
knox / Jina-16.py
```python
from jina import Client, Deployment

# Server
with Deployment(uses=TokenStreamingExecutor, port=12345, protocol='grpc') as dep:
    dep.block()


# Client
async def main():
    client = Client(port=12345, protocol='grpc', asyncio=True)
    async for doc in client.stream_doc(
        on='/stream',
        # …
```
knox / Jina-15.py
```python
@requests(on='/stream')
async def task(self, doc: PromptDocument, **kwargs) -> ModelOutputDocument:
    input = tokenizer(doc.prompt, return_tensors='pt')
    input_len = input['input_ids'].shape[1]
    for _ in range(doc.max_tokens):
        output = self.model.generate(**input, max_new_tokens=1)
        if output[0][-1] == tokenizer.eos_token_id:
            break
        yield ModelOutputDocument(
            token_id=output[0][-1],
            # …
```
knox / Jina-14.py
```python
from jina import Executor
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# module-level tokenizer, referenced by the /stream handler
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')


class TokenStreamingExecutor(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.model = GPT2LMHeadModel.from_pretrained('gpt2')
```
knox / Jina-13.py
```python
from docarray import BaseDoc


class PromptDocument(BaseDoc):
    prompt: str
    max_tokens: int


class ModelOutputDocument(BaseDoc):
    token_id: int
```
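The two schemas are just typed records: a prompt plus a generation budget goes in, and one token id comes back per streamed chunk. A rough stand-in sketch using only the standard library (the dataclasses here are a hypothetical substitute for `docarray.BaseDoc`, purely for illustration):

```python
from dataclasses import dataclass


# hypothetical stand-ins for the docarray.BaseDoc schemas above
@dataclass
class PromptDocument:
    prompt: str
    max_tokens: int


@dataclass
class ModelOutputDocument:
    token_id: int


request = PromptDocument(prompt='What is the capital of France?', max_tokens=8)
chunk = ModelOutputDocument(token_id=464)
```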
knox / Jina-12.sh
```shell
jina cloud deploy jcloud-flow.yml
```
knox / Jina-11.sh
```shell
jina export docker-compose flow.yml docker-compose.yml
docker-compose up
```
knox / Jina-10.sh
```shell
jina export kubernetes flow.yml ./my-k8s
kubectl apply -R -f my-k8s
```
knox / Jina-8.yaml
```yaml
# config.yml
jtype: TextToImage
py_modules:
  - executor.py
metas:
  name: TextToImage
  description: Text to Image generation Executor
```