
Allow custom models in headlines and descriptions #15

Open · wants to merge 1 commit into main

Conversation

atomasmackey (Collaborator)

Allow custom models in headlines and descriptions

… headline and description generation separately

@JuanGriggio (Collaborator) left a comment


Ale, I left a few comments, if you want we can go over them later.

Comment on lines +38 to +40
self.client_id=None
self.client_secret=None
self.refresh_token=None

These 3 lines aren't needed.

Comment on lines -49 to +64
client_id = config.get('client_id')
client_secret = config.get('client_secret')
refresh_token = config.get('refresh_token')
self.client_id = config.get('client_id')
self.client_secret = config.get('client_secret')
self.refresh_token = config.get('refresh_token')

They don't need to be instance variables (the self. isn't needed).


if not client_id or not client_secret or not refresh_token:
if not self.client_id or not self.client_secret or not self.refresh_token:

They don't need to be instance variables (the self. isn't needed).

Comment on lines -57 to +73
'client_id': client_id,
'client_secret': client_secret,
'refresh_token': refresh_token,
'client_id': self.client_id,
'client_secret': self.client_secret,
'refresh_token': self.refresh_token,

They don't need to be instance variables (the self. isn't needed).

creds, _ = default(scopes=API_SCOPES)
self.is_authentication_method_client_id=False

This isn't needed.

self.client_id=None
self.client_secret=None
self.refresh_token=None
self.is_authentication_method_client_id='unauthenticated'

Better to initialize it like this: self.is_authentication_method_client_id=False or self.is_authentication_method_client_id=None. That way, the variable stays a single type (bool).
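
For illustration only, a minimal sketch of that single-type flag, assuming the configuration states implied by this diff (the AuthState class and configure method are hypothetical names, not part of the PR):

# Hypothetical sketch: keep the flag a bool/None instead of mixing in the
# string 'unauthenticated'.
class AuthState:
    def __init__(self):
        self.is_authentication_method_client_id = None  # unknown until config is read

    def configure(self, client_id, client_secret, refresh_token):
        if client_id and client_secret and refresh_token:
            self.is_authentication_method_client_id = True   # OAuth client flow
        else:
            self.is_authentication_method_client_id = False  # default credentials flow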


Overall the idea is very good, but I wouldn't put all the code in this file. The ContentGeneratorService shouldn't know which endpoint it has to hit, nor deal with authentication concerns. That's what AuthenticationHelper and GeminiHelper are for. I would try to restructure it so that each service handles its own purpose and nothing else.
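
A rough sketch of that split, with the caveat that the helper method names below (get_access_token, generate) are placeholders and not the project's actual API:

# Sketch of the suggested separation of concerns: ContentGeneratorService only
# builds prompts and interprets results, while AuthenticationHelper owns
# credentials/tokens and GeminiHelper owns endpoints and request details.
class ContentGeneratorService:
    def __init__(self, authentication_helper, gemini_helper):
        self._auth = authentication_helper
        self._gemini = gemini_helper

    def generate_headlines(self, prompt):
        # No endpoint URLs or auth logic here; that lives in the helpers.
        access_token = self._auth.get_access_token()
        return self._gemini.generate(prompt, access_token=access_token)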

Comment on lines -121 to +159
response = self.model.generate_content(
    prompt,
    generation_config=GENERATION_CONFIG,
    safety_settings=SAFETY_SETTINGS
)
start_idx = response.text.index('[')
end_idx = response.text.index(']')
time.sleep(TIME_INTERVAL_BETWEEN_REQUESTS)

return ast.literal_eval(response.text[start_idx:end_idx+1])
if custom_endpoint:
    # Custom endpoint request
    payload = {
        "contents": {
            "role": "USER",
            "parts": {"text": prompt}
        },
        "generation_config": GENERATION_CONFIG
    }
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json"
    }
    response = requests.post(custom_endpoint, json=payload, headers=headers)

    if response.status_code != 200:
        raise Exception(f"Custom endpoint request failed with status code {response.status_code}")
    response_data = response.json()
    response_data = response_data['candidates'][0]['content']['parts'][0]['text']
    start_idx = response_data.index('[')
    end_idx = response_data.index(']')
    return ast.literal_eval(response_data[start_idx:end_idx+1])
else:
    response = self.model.generate_content(
        prompt,
        generation_config=GENERATION_CONFIG,
        safety_settings=SAFETY_SETTINGS
    )
    start_idx = response.text.index('[')
    end_idx = response.text.index(']')
    time.sleep(TIME_INTERVAL_BETWEEN_REQUESTS)
    return ast.literal_eval(response.text[start_idx:end_idx+1])

All of this is fine, but I would do it a bit differently. In a separate function, say __call_gemini, I would have it call either the custom endpoint or the standard one depending on the configuration, and always return the same thing. That way, the code in this part stays almost the same as before, the processing of Gemini's response isn't duplicated, and it's tidier.
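
For reference, a rough sketch of that shape, assuming the module constants already used in this diff (GENERATION_CONFIG, SAFETY_SETTINGS, TIME_INTERVAL_BETWEEN_REQUESTS) and a hypothetical calling method name; this illustrates the suggestion, not the actual implementation:

import ast
import time

import requests

class ContentGeneratorService:
    def __call_gemini(self, prompt, custom_endpoint=None, access_token=None):
        # Hit either the custom endpoint or the standard model and always
        # return the raw response text, so parsing lives in one place.
        if custom_endpoint:
            payload = {
                "contents": {"role": "USER", "parts": {"text": prompt}},
                "generation_config": GENERATION_CONFIG
            }
            headers = {
                "Authorization": f"Bearer {access_token}",
                "Content-Type": "application/json"
            }
            response = requests.post(custom_endpoint, json=payload, headers=headers)
            if response.status_code != 200:
                raise Exception(
                    f"Custom endpoint request failed with status code {response.status_code}")
            return response.json()['candidates'][0]['content']['parts'][0]['text']
        response = self.model.generate_content(
            prompt,
            generation_config=GENERATION_CONFIG,
            safety_settings=SAFETY_SETTINGS
        )
        time.sleep(TIME_INTERVAL_BETWEEN_REQUESTS)
        return response.text

    def generate_content_list(self, prompt, custom_endpoint=None, access_token=None):
        # Hypothetical caller: the response is parsed exactly once,
        # whichever path __call_gemini took.
        text = self.__call_gemini(prompt, custom_endpoint, access_token)
        start_idx = text.index('[')
        end_idx = text.index(']')
        return ast.literal_eval(text[start_idx:end_idx + 1])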
