Knowledge Graph Agent Reference

This module handles the generation of use case-specific database prompts and their execution against a database via the database agent.

Dynamic prompt generation for BioCypher knowledge graphs

BioCypherPromptEngine

Source code in biochatter/prompts.py
class BioCypherPromptEngine:
    def __init__(
        self,
        schema_config_or_info_path: Optional[str] = None,
        schema_config_or_info_dict: Optional[dict] = None,
        model_name: str = "gpt-3.5-turbo",
        conversation_factory: Optional[Callable] = None,
    ) -> None:
        """

        Given a biocypher schema configuration, extract the entities and
        relationships, and for each extract their mode of representation (node
        or edge), properties, and identifier namespace. Using these data, allow
        the generation of prompts for a large language model, informing it of
        the schema constituents and their properties, to enable the
        parameterisation of function calls to a knowledge graph.

        Args:
            schema_config_or_info_path: Path to a biocypher schema configuration
                file or the extended schema information output generated by
                BioCypher's `write_schema_info` function (preferred).

            schema_config_or_info_dict: A dictionary containing the schema
                configuration file or the extended schema information output
                generated by BioCypher's `write_schema_info` function
                (preferred).

            model_name: The name of the model to use for the conversation.
                DEPRECATED: This should now be set in the conversation factory.

            conversation_factory: A function used to create a conversation for
                creating the KG query. If not provided, a default function is
                used (creating an OpenAI conversation with the specified model,
                see `_get_conversation`).
        """

        if not schema_config_or_info_path and not schema_config_or_info_dict:
            raise ValueError(
                "Please provide the schema configuration or schema info as a "
                "path to a file or as a dictionary."
            )

        if schema_config_or_info_path and schema_config_or_info_dict:
            raise ValueError(
                "Please provide the schema configuration or schema info as a "
                "path to a file or as a dictionary, not both."
            )

        # set conversation factory or use default
        self.conversation_factory = (
            conversation_factory
            if conversation_factory is not None
            else self._get_conversation
        )

        if schema_config_or_info_path:
            # read the schema configuration
            with open(schema_config_or_info_path, "r") as f:
                schema_config = yaml.safe_load(f)
        elif schema_config_or_info_dict:
            schema_config = schema_config_or_info_dict

        # check whether it is the original schema config or the output of
        # biocypher info
        is_schema_info = schema_config.get("is_schema_info", False)

        # extract the entities and relationships: each top level key that has
        # a 'represented_as' key
        self.entities = {}
        self.relationships = {}
        if not is_schema_info:
            for key, value in schema_config.items():
                # hacky, better with biocypher output
                name_indicates_relationship = (
                    "interaction" in key.lower() or "association" in key.lower()
                )
                if "represented_as" in value:
                    if (
                        value["represented_as"] == "node"
                        and not name_indicates_relationship
                    ):
                        self.entities[sentencecase_to_pascalcase(key)] = value
                    elif (
                        value["represented_as"] == "node"
                        and name_indicates_relationship
                    ):
                        self.relationships[sentencecase_to_pascalcase(key)] = (
                            value
                        )
                    elif value["represented_as"] == "edge":
                        self.relationships[sentencecase_to_pascalcase(key)] = (
                            value
                        )
        else:
            for key, value in schema_config.items():
                if not isinstance(value, dict):
                    continue
                if value.get("present_in_knowledge_graph", None) is False:
                    continue
                if value.get("is_relationship", None) is False:
                    self.entities[sentencecase_to_pascalcase(key)] = value
                elif value.get("is_relationship", None) is True:
                    value = self._capitalise_source_and_target(value)
                    self.relationships[sentencecase_to_pascalcase(key)] = value

        self.question = ""
        self.selected_entities = []
        self.selected_relationships = []  # used in property selection
        self.selected_relationship_labels = {}  # copy to deal with labels that
        # are not the same as the relationship name, used in query generation
        # dictionary to also include source and target types
        self.rel_directions = {}
        self.model_name = model_name

    def _capitalise_source_and_target(self, relationship: dict) -> dict:
        """
        Make sources and targets PascalCase to match the entities. Sources and
        targets can be strings or lists of strings.
        """
        if "source" in relationship:
            if isinstance(relationship["source"], str):
                relationship["source"] = sentencecase_to_pascalcase(
                    relationship["source"]
                )
            elif isinstance(relationship["source"], list):
                relationship["source"] = [
                    sentencecase_to_pascalcase(s)
                    for s in relationship["source"]
                ]
        if "target" in relationship:
            if isinstance(relationship["target"], str):
                relationship["target"] = sentencecase_to_pascalcase(
                    relationship["target"]
                )
            elif isinstance(relationship["target"], list):
                relationship["target"] = [
                    sentencecase_to_pascalcase(t)
                    for t in relationship["target"]
                ]
        return relationship

    def _select_graph_entities_from_question(
        self, question: str, conversation: Conversation
    ) -> None:
        conversation.reset()
        success1 = self._select_entities(
            question=question, conversation=conversation
        )
        if not success1:
            raise ValueError(
                "Entity selection failed. Please try again with a different "
                "question."
            )
        conversation.reset()
        success2 = self._select_relationships(conversation=conversation)
        if not success2:
            raise ValueError(
                "Relationship selection failed. Please try again with a "
                "different question."
            )
        conversation.reset()
        success3 = self._select_properties(conversation=conversation)
        if not success3:
            raise ValueError(
                "Property selection failed. Please try again with a different "
                "question."
            )

    def _generate_query_prompt(
        self,
        entities: list,
        relationships: dict,
        properties: dict,
        query_language: Optional[str] = "Cypher",
    ) -> str:
        """
        Generate a prompt for a large language model to generate a database
        query based on the selected entities, relationships, and properties.

        Args:
            entities: A list of entities that are relevant to the question.

            relationships: A list of relationships that are relevant to the
                question.

            properties: A dictionary of properties that are relevant to the
                question.

            query_language: The language of the query to generate.

        Returns:
            A prompt for a large language model to generate a database query.
        """
        msg = (
            f"Generate a database query in {query_language} that answers "
            f"the user's question. "
            f"You can use the following entities: {entities}, "
            f"relationships: {list(relationships.keys())}, and "
            f"properties: {properties}. "
        )

        for relationship, values in relationships.items():
            self._expand_pairs(relationship, values)

        if self.rel_directions:
            msg += "Given the following valid combinations of source, relationship, and target: "
            for key, value in self.rel_directions.items():
                for pair in value:
                    msg += f"'(:{pair[0]})-(:{key})->(:{pair[1]})', "
            msg += f"generate a {query_language} query using one of these combinations. "

        msg += "Only return the query, without any additional text, symbols or characters --- just the query statement."
        return msg

    def generate_query_prompt(
        self, question: str, query_language: Optional[str] = "Cypher"
    ) -> str:
        """
        Generate a prompt for a large language model to generate a database
        query based on the user's question and class attributes informing about
        the schema.

        Args:
            question: A user's question.

            query_language: The language of the query to generate.

        Returns:
            A prompt for a large language model to generate a database query.
        """
        self._select_graph_entities_from_question(
            question, self.conversation_factory()
        )
        msg = self._generate_query_prompt(
            self.selected_entities,
            self.selected_relationship_labels,
            self.selected_properties,
            query_language,
        )
        return msg

    def generate_query(
        self, question: str, query_language: Optional[str] = "Cypher"
    ) -> str:
        """
        Wrap entity and property selection and query generation; return the
        generated query.

        Args:
            question: A user's question.

            query_language: The language of the query to generate.

        Returns:
            A database query that could answer the user's question.
        """

        self._select_graph_entities_from_question(
            question, self.conversation_factory()
        )

        return self._generate_query(
            question=question,
            entities=self.selected_entities,
            relationships=self.selected_relationship_labels,
            properties=self.selected_properties,
            query_language=query_language,
            conversation=self.conversation_factory(),
        )

    def _get_conversation(
        self, model_name: Optional[str] = None
    ) -> "Conversation":
        """
        Create a conversation object given a model name.

        Args:
            model_name: The name of the model to use for the conversation.

        Returns:
            A BioChatter Conversation object for connecting to the LLM.

        Todo:
            Genericise to models outside of OpenAI.
        """

        conversation = GptConversation(
            model_name=model_name or self.model_name,
            prompts={},
            correct=False,
        )
        conversation.set_api_key(
            api_key=os.getenv("OPENAI_API_KEY"), user="test_user"
        )
        return conversation

    def _select_entities(
        self, question: str, conversation: "Conversation"
    ) -> bool:
        """

        Given a question, select the entities that are relevant to the question
        and store them in `selected_entities` and `selected_relationships`. Use
        LLM conversation to do this.

        Args:
            question: A user's question.

            conversation: A BioChatter Conversation object for connecting to the
                LLM.

        Returns:
            True if at least one entity was selected, False otherwise.

        """

        self.question = question

        conversation.append_system_message(
            (
                "You have access to a knowledge graph that contains "
                f"these entity types: {', '.join(self.entities)}. Your task is "
                "to select the entity types that are relevant to the user's question "
                "for subsequent use in a query. Only return the entity types, "
                "comma-separated, without any additional text. Do not return "
                "entity names, relationships, or properties."
            )
        )

        msg, token_usage, correction = conversation.query(question)

        result = msg.split(",") if msg else []
        # TODO: do we go back and retry if no entities were selected? or ask for
        # a reason? offer visual selection of entities and relationships by the
        # user?

        if result:
            for entity in result:
                entity = entity.strip()
                if entity in self.entities:
                    self.selected_entities.append(entity)

        return bool(result)

    def _select_relationships(self, conversation: "Conversation") -> bool:
        """
        Given a question and the preselected entities, select relationships for
        the query.

        Args:
            conversation: A BioChatter Conversation object for connecting to the
                LLM.

        Returns:
            True if at least one relationship was selected, False otherwise.

        Todo:
            Now we have the problem that we discard all relationships that do
            not have a source and target, if at least one relationship has a
            source and target. At least communicate this all-or-nothing
            behaviour to the user.
        """

        if not self.question:
            raise ValueError(
                "No question found. Please make sure to run entity selection "
                "first."
            )

        if not self.selected_entities:
            raise ValueError(
                "No entities found. Please run the entity selection step first."
            )

        rels = {}
        source_and_target_present = False
        for key, value in self.relationships.items():
            if "source" in value and "target" in value:
                # if source or target is a list, expand to single pairs
                source = ensure_iterable(value["source"])
                target = ensure_iterable(value["target"])
                pairs = []
                for s in source:
                    for t in target:
                        pairs.append(
                            (
                                sentencecase_to_pascalcase(s),
                                sentencecase_to_pascalcase(t),
                            )
                        )
                rels[key] = pairs
                source_and_target_present = True
            else:
                rels[key] = {}

        # prioritise relationships that have source and target, and discard
        # relationships that do not have both source and target, if at least one
        # relationship has both source and target. keep relationships that have
        # either source or target, if none of the relationships have both source
        # and target.

        if source_and_target_present:
            # First, separate the relationships into two groups: those with both
            # source and target in the selected entities, and those with either
            # source or target but not both.

            rels_with_both = {}
            rels_with_either = {}
            for key, value in rels.items():
                for pair in value:
                    if pair[0] in self.selected_entities:
                        if pair[1] in self.selected_entities:
                            rels_with_both[key] = value
                        else:
                            rels_with_either[key] = value
                    elif pair[1] in self.selected_entities:
                        rels_with_either[key] = value

            # If there are any relationships with both source and target,
            # discard the others.

            if rels_with_both:
                rels = rels_with_both
            else:
                rels = rels_with_either

            selected_rels = []
            for key, value in rels.items():
                if not value:
                    continue

                for pair in value:
                    if (
                        pair[0] in self.selected_entities
                        or pair[1] in self.selected_entities
                    ):
                        selected_rels.append((key, pair))

            rels = json.dumps(selected_rels)
        else:
            rels = json.dumps(self.relationships)

        msg = (
            "You have access to a knowledge graph that contains "
            f"these entities: {', '.join(self.selected_entities)}. "
            "Your task is to select the relationships that are relevant "
            "to the user's question for subsequent use in a query. Only "
            "return the relationships without their sources or targets, "
            "comma-separated, and without any additional text. Here are the "
            "possible relationships and their source and target entities: "
            f"{rels}."
        )

        conversation.append_system_message(msg)

        res, token_usage, correction = conversation.query(self.question)

        result = res.split(",") if res else []

        if result:
            for relationship in result:
                relationship = relationship.strip()
                if relationship in self.relationships:
                    self.selected_relationships.append(relationship)
                    rel_dict = self.relationships[relationship]
                    label = rel_dict.get("label_as_edge", relationship)
                    if "source" in rel_dict and "target" in rel_dict:
                        self.selected_relationship_labels[label] = {
                            "source": rel_dict["source"],
                            "target": rel_dict["target"],
                        }
                    else:
                        self.selected_relationship_labels[label] = {
                            "source": None,
                            "target": None,
                        }

        # if we selected relationships that have either source or target which
        # is not in the selected entities, we add those entities to the selected
        # entities.

        if self.selected_relationship_labels:
            for key, value in self.selected_relationship_labels.items():
                sources = ensure_iterable(value["source"])
                targets = ensure_iterable(value["target"])
                for source in sources:
                    if source is None:
                        continue
                    if source not in self.selected_entities:
                        self.selected_entities.append(
                            sentencecase_to_pascalcase(source)
                        )
                for target in targets:
                    if target is None:
                        continue
                    if target not in self.selected_entities:
                        self.selected_entities.append(
                            sentencecase_to_pascalcase(target)
                        )

        return bool(result)

    @staticmethod
    def _validate_json_str(json_str: str):
        json_str = json_str.strip()
        if json_str.startswith("```json"):
            json_str = json_str[7:]
        if json_str.endswith("```"):
            json_str = json_str[:-3]
        return json_str.strip()

    def _select_properties(self, conversation: "Conversation") -> bool:
        """

        Given a question (optionally provided, but in the standard use case
        reused from the entity selection step) and the selected entities, select
        the properties that are relevant to the question and store them in
        the dictionary `selected_properties`.

        Returns:
            True if at least one property was selected, False otherwise.

        """

        if not self.question:
            raise ValueError(
                "No question found. Please make sure to run entity and "
                "relationship selection first."
            )

        if not self.selected_entities and not self.selected_relationships:
            raise ValueError(
                "No entities or relationships provided, and none available "
                "from entity selection step. Please provide "
                "entities/relationships or run the entity selection "
                "(`select_entities()`) step first."
            )

        e_props = {}
        for entity in self.selected_entities:
            if self.entities[entity].get("properties"):
                e_props[entity] = list(
                    self.entities[entity]["properties"].keys()
                )

        r_props = {}
        for relationship in self.selected_relationships:
            if self.relationships[relationship].get("properties"):
                r_props[relationship] = list(
                    self.relationships[relationship]["properties"].keys()
                )

        msg = (
            "You have access to a knowledge graph that contains entities and "
            "relationships. They have the following properties. Entities:"
            f"{e_props}, Relationships: {r_props}. "
            "Your task is to select the properties that are relevant to the "
            "user's question for subsequent use in a query. Only return the "
            "entities and relationships with their relevant properties in compact "
            "JSON format, without any additional text. Return the "
            "entities/relationships as top-level dictionary keys, and their "
            "properties as dictionary values. "
            "Do not return properties that are not relevant to the question."
        )

        conversation.append_system_message(msg)

        msg, token_usage, correction = conversation.query(self.question)
        msg = BioCypherPromptEngine._validate_json_str(msg)

        try:
            self.selected_properties = json.loads(msg) if msg else {}
        except json.decoder.JSONDecodeError:
            self.selected_properties = {}

        return bool(self.selected_properties)

    def _generate_query(
        self,
        question: str,
        entities: list,
        relationships: dict,
        properties: dict,
        query_language: str,
        conversation: "Conversation",
    ) -> str:
        """
        Generate a query in the specified query language that answers the user's
        question.

        Args:
            question: A user's question.

            entities: A list of entities that are relevant to the question.

            relationships: A list of relationships that are relevant to the
                question.

            properties: A dictionary of properties that are relevant to the
                question.

            query_language: The language of the query to generate.

            conversation: A BioChatter Conversation object for connecting to the
                LLM.

        Returns:
            A database query that could answer the user's question.
        """
        msg = self._generate_query_prompt(
            entities,
            relationships,
            properties,
            query_language,
        )

        conversation.append_system_message(msg)

        out_msg, token_usage, correction = conversation.query(question)

        return out_msg.strip()

    def _expand_pairs(self, relationship, values) -> None:
        if not self.rel_directions.get(relationship):
            self.rel_directions[relationship] = []
        if isinstance(values["source"], list):
            for source in values["source"]:
                if isinstance(values["target"], list):
                    for target in values["target"]:
                        self.rel_directions[relationship].append(
                            (source, target)
                        )
                else:
                    self.rel_directions[relationship].append(
                        (source, values["target"])
                    )
        elif isinstance(values["target"], list):
            for target in values["target"]:
                self.rel_directions[relationship].append(
                    (values["source"], target)
                )
        else:
            self.rel_directions[relationship].append(
                (values["source"], values["target"])
            )

__init__(schema_config_or_info_path=None, schema_config_or_info_dict=None, model_name='gpt-3.5-turbo', conversation_factory=None)

Given a biocypher schema configuration, extract the entities and relationships, and for each extract their mode of representation (node or edge), properties, and identifier namespace. Using these data, allow the generation of prompts for a large language model, informing it of the schema constituents and their properties, to enable the parameterisation of function calls to a knowledge graph.

Parameters:

- schema_config_or_info_path (Optional[str], default None): Path to a biocypher schema configuration file or the extended schema information output generated by BioCypher's write_schema_info function (preferred).
- schema_config_or_info_dict (Optional[dict], default None): A dictionary containing the schema configuration file or the extended schema information output generated by BioCypher's write_schema_info function (preferred).
- model_name (str, default 'gpt-3.5-turbo'): The name of the model to use for the conversation. DEPRECATED: This should now be set in the conversation factory.
- conversation_factory (Optional[Callable], default None): A function used to create a conversation for creating the KG query. If not provided, a default function is used (creating an OpenAI conversation with the specified model, see _get_conversation).
Source code in biochatter/prompts.py
def __init__(
    self,
    schema_config_or_info_path: Optional[str] = None,
    schema_config_or_info_dict: Optional[dict] = None,
    model_name: str = "gpt-3.5-turbo",
    conversation_factory: Optional[Callable] = None,
) -> None:
    """

    Given a biocypher schema configuration, extract the entities and
    relationships, and for each extract their mode of representation (node
    or edge), properties, and identifier namespace. Using these data, allow
    the generation of prompts for a large language model, informing it of
    the schema constituents and their properties, to enable the
    parameterisation of function calls to a knowledge graph.

    Args:
        schema_config_or_info_path: Path to a biocypher schema configuration
            file or the extended schema information output generated by
            BioCypher's `write_schema_info` function (preferred).

        schema_config_or_info_dict: A dictionary containing the schema
            configuration file or the extended schema information output
            generated by BioCypher's `write_schema_info` function
            (preferred).

        model_name: The name of the model to use for the conversation.
            DEPRECATED: This should now be set in the conversation factory.

        conversation_factory: A function used to create a conversation for
            creating the KG query. If not provided, a default function is
            used (creating an OpenAI conversation with the specified model,
            see `_get_conversation`).
    """

    if not schema_config_or_info_path and not schema_config_or_info_dict:
        raise ValueError(
            "Please provide the schema configuration or schema info as a "
            "path to a file or as a dictionary."
        )

    if schema_config_or_info_path and schema_config_or_info_dict:
        raise ValueError(
            "Please provide the schema configuration or schema info as a "
            "path to a file or as a dictionary, not both."
        )

    # set conversation factory or use default
    self.conversation_factory = (
        conversation_factory
        if conversation_factory is not None
        else self._get_conversation
    )

    if schema_config_or_info_path:
        # read the schema configuration
        with open(schema_config_or_info_path, "r") as f:
            schema_config = yaml.safe_load(f)
    elif schema_config_or_info_dict:
        schema_config = schema_config_or_info_dict

    # check whether it is the original schema config or the output of
    # biocypher info
    is_schema_info = schema_config.get("is_schema_info", False)

    # extract the entities and relationships: each top level key that has
    # a 'represented_as' key
    self.entities = {}
    self.relationships = {}
    if not is_schema_info:
        for key, value in schema_config.items():
            # heuristic; better handled via BioCypher's schema info output
            name_indicates_relationship = (
                "interaction" in key.lower() or "association" in key.lower()
            )
            if "represented_as" in value:
                if (
                    value["represented_as"] == "node"
                    and not name_indicates_relationship
                ):
                    self.entities[sentencecase_to_pascalcase(key)] = value
                elif (
                    value["represented_as"] == "node"
                    and name_indicates_relationship
                ):
                    self.relationships[sentencecase_to_pascalcase(key)] = (
                        value
                    )
                elif value["represented_as"] == "edge":
                    self.relationships[sentencecase_to_pascalcase(key)] = (
                        value
                    )
    else:
        for key, value in schema_config.items():
            if not isinstance(value, dict):
                continue
            if value.get("present_in_knowledge_graph", None) == False:
                continue
            if value.get("is_relationship", None) == False:
                self.entities[sentencecase_to_pascalcase(key)] = value
            elif value.get("is_relationship", None) == True:
                value = self._capitalise_source_and_target(value)
                self.relationships[sentencecase_to_pascalcase(key)] = value

    self.question = ""
    self.selected_entities = []
    self.selected_relationships = []  # used in property selection
    self.selected_relationship_labels = {}
    # copy of the selected relationships used in query generation; keyed by
    # relationship label (which may differ from the relationship name) and
    # extended with source and target types
    self.rel_directions = {}
    self.model_name = model_name
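
The entity/relationship split above hinges on a naming heuristic: classes represented as nodes are still treated as relationships when their names contain "interaction" or "association". A minimal, self-contained sketch of that heuristic (the helper and the toy schema are illustrative, not biochatter's actual code):

```python
def sentencecase_to_pascalcase(name: str) -> str:
    # illustrative stand-in for biochatter's helper of the same name
    return "".join(word.capitalize() for word in name.split(" "))

def classify_schema(schema_config: dict) -> tuple[dict, dict]:
    entities: dict = {}
    relationships: dict = {}
    for key, value in schema_config.items():
        if "represented_as" not in value:
            continue
        # names containing "interaction" or "association" are treated as
        # relationships even when they are represented as nodes
        name_indicates_relationship = (
            "interaction" in key.lower() or "association" in key.lower()
        )
        if value["represented_as"] == "node" and not name_indicates_relationship:
            entities[sentencecase_to_pascalcase(key)] = value
        else:
            # edges, and nodes whose names indicate a relationship
            relationships[sentencecase_to_pascalcase(key)] = value
    return entities, relationships

# hypothetical schema configuration fragment
schema = {
    "protein": {"represented_as": "node"},
    "gene to protein association": {"represented_as": "edge"},
    "protein protein interaction": {"represented_as": "node"},
}
entities, relationships = classify_schema(schema)
```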

generate_query(question, query_language='Cypher')

Wrap entity and property selection and query generation; return the generated query.

Parameters:

    question (str): A user's question. Required.

    query_language (Optional[str]): The language of the query to generate. Default: 'Cypher'.

Returns:

    str: A database query that could answer the user's question.

Source code in biochatter/prompts.py
def generate_query(
    self, question: str, query_language: Optional[str] = "Cypher"
) -> str:
    """
    Wrap entity and property selection and query generation; return the
    generated query.

    Args:
        question: A user's question.

        query_language: The language of the query to generate.

    Returns:
        A database query that could answer the user's question.
    """

    self._select_graph_entities_from_question(
        question, self.conversation_factory()
    )

    return self._generate_query(
        question=question,
        entities=self.selected_entities,
        relationships=self.selected_relationship_labels,
        properties=self.selected_properties,
        query_language=query_language,
        conversation=self.conversation_factory(),
    )

generate_query_prompt(question, query_language='Cypher')

Generate a prompt for a large language model to generate a database query based on the user's question and class attributes that describe the schema.

Parameters:

    question (str): A user's question. Required.

    query_language (Optional[str]): The language of the query to generate. Default: 'Cypher'.

Returns:

    str: A prompt for a large language model to generate a database query.

Source code in biochatter/prompts.py
def generate_query_prompt(
    self, question: str, query_language: Optional[str] = "Cypher"
) -> str:
    """
    Generate a prompt for a large language model to generate a database
    query based on the user's question and class attributes that describe
    the schema.

    Args:
        question: A user's question.

        query_language: The language of the query to generate.

    Returns:
        A prompt for a large language model to generate a database query.
    """
    self._select_graph_entities_from_question(
        question, self.conversation_factory()
    )
    msg = self._generate_query_prompt(
        self.selected_entities,
        self.selected_relationship_labels,
        self.selected_properties,
        query_language,
    )
    return msg

Execution of prompts against the database

DatabaseAgent

Source code in biochatter/database_agent.py
class DatabaseAgent:
    def __init__(
        self,
        model_name: str,
        connection_args: dict,
        schema_config_or_info_dict: dict,
        conversation_factory: Callable,
        use_reflexion: bool,
    ) -> None:
        """
        Create a DatabaseAgent analogous to the VectorDatabaseAgentMilvus class,
        which can return results from a database using a query engine. Currently
        limited to Neo4j for development.

        Args:
            connection_args (dict): A dictionary of arguments to connect to the
                database. Contains database name, URI, user, and password.

            conversation_factory (Callable): A function to create a conversation
                for creating the KG query.

            use_reflexion (bool): Whether to use the ReflexionAgent to generate
                the query.
        """
        self.conversation_factory = conversation_factory
        self.prompt_engine = BioCypherPromptEngine(
            model_name=model_name,
            schema_config_or_info_dict=schema_config_or_info_dict,
            conversation_factory=conversation_factory,
        )
        self.connection_args = connection_args
        self.driver = None
        self.use_reflexion = use_reflexion

    def connect(self) -> None:
        """
        Connect to the database and authenticate.
        """
        db_name = self.connection_args.get("db_name")
        uri = f"{self.connection_args.get('host')}:{self.connection_args.get('port')}"
        uri = uri if uri.startswith("bolt://") else "bolt://" + uri
        user = self.connection_args.get("user")
        password = self.connection_args.get("password")
        self.driver = nu.Driver(
            db_name=db_name or "neo4j",
            db_uri=uri,
            user=user,
            password=password,
        )

    def is_connected(self) -> bool:
        return self.driver is not None

    def _generate_query(self, query: str):
        if self.use_reflexion:
            agent = KGQueryReflexionAgent(
                self.conversation_factory,
                self.connection_args,
            )
            query_prompt = self.prompt_engine.generate_query_prompt(query)
            agent_result = agent.execute(query, query_prompt)
            tool_result = (
                [agent_result.tool_result]
                if agent_result.tool_result is not None
                else None
            )
            return agent_result.answer, tool_result
        else:
            query = self.prompt_engine.generate_query(query)
            results = self.driver.query(query=query)
            return query, results

    def _build_response(
        self,
        results: List[Dict],
        cypher_query: str,
        results_num: Optional[int] = 3,
    ) -> List[Document]:
        if len(results) == 0:
            return [
                Document(
                    page_content=(
                        "I didn't find any result in knowledge graph, "
                        f"but here is the query I used: {cypher_query}. "
                        "You can ask user to refine the question. "
                        "Note: please ensure to include the query in a code "
                        "block in your response so that the user can refine "
                        "their question effectively."
                    ),
                    metadata={"cypher_query": cypher_query},
                )
            ]

        clipped_results = results[:results_num] if results_num > 0 else results
        results_dump = json.dumps(clipped_results)

        return [
            Document(
                page_content=(
                    "The results retrieved from knowledge graph are: "
                    f"{results_dump}. "
                    f"The query used is: {cypher_query}. "
                    "Note: please ensure to include the query in a code block "
                    "in your response so that the user can refine "
                    "their question effectively."
                ),
                metadata={"cypher_query": cypher_query},
            )
        ]

    def get_query_results(self, query: str, k: int = 3) -> list[Document]:
        """
        Generate a query using the prompt engine and return the results.
        Replicates vector database similarity search API. Results are returned
        as a list of Document objects to align with the vector database agent.

        Args:
            query (str): A query string.

            k (int): The number of results to return.

        Returns:
            List[Document]: A list of Document objects. The page content values
                are the literal dictionaries returned by the query, the metadata
                values are the cypher query used to generate the results, for
                now.
        """
        (cypher_query, tool_result) = self._generate_query(
            query
        )  # self.prompt_engine.generate_query(query)
        # TODO some logic if it fails?
        if tool_result is not None:
            # _generate_query() already returned a tool result, so there is
            # no need to query the graph database again
            results = tool_result
        else:
            results = self.driver.query(query=cypher_query)

        # return first k results
        # returned nodes can have any formatting, and can also be empty or fewer
        # than k
        if results is None or len(results) == 0 or results[0] is None:
            return []
        return self._build_response(
            results=results[0], cypher_query=cypher_query, results_num=k
        )

    def get_description(self):
        result = self.driver.query("MATCH (n:Schema_info) RETURN n LIMIT 1")

        if result[0]:
            schema_info_node = result[0][0]["n"]
            schema_dict_content = schema_info_node["schema_info"][
                :MAX_AGENT_DESC_LENGTH
            ]  # truncate to MAX_AGENT_DESC_LENGTH characters
            return (
                f"the graph database contains the following nodes and edges: \n\n"
                f"{schema_dict_content}"
            )

        # schema_info is not found in database
        nodes_query = "MATCH (n) RETURN DISTINCT labels(n) LIMIT 300"
        node_results = self.driver.query(query=nodes_query)
        edges_query = "MATCH ()-[r]->() RETURN DISTINCT type(r) LIMIT 300"
        edge_results = self.driver.query(query=edges_query)
        desc = (
            f"The graph database contains the following nodes and edges: \n"
            f"nodes: \n{node_results}"
            f"edges: \n{edge_results}"
        )
        return desc[:MAX_AGENT_DESC_LENGTH]
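
`_build_response` clips results to `results_num` and, when nothing comes back, returns a single Document that carries only the query so the user can refine their question. A self-contained sketch of that behaviour, using a minimal stand-in for the langchain `Document` class (names and message wording are illustrative):

```python
import json
from dataclasses import dataclass, field

@dataclass
class Document:
    """Minimal stand-in for langchain's Document class."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def build_response(results: list, cypher_query: str, results_num: int = 3) -> list:
    if len(results) == 0:
        # no results: return a Document carrying only the query, so the
        # caller can surface it to the user for refinement
        return [Document(
            page_content=f"No results found; the query used was: {cypher_query}",
            metadata={"cypher_query": cypher_query},
        )]
    # clip to the requested number of results (non-positive means "all")
    clipped = results[:results_num] if results_num > 0 else results
    return [Document(
        page_content=f"Results: {json.dumps(clipped)}. Query: {cypher_query}",
        metadata={"cypher_query": cypher_query},
    )]

docs = build_response(
    [{"n": 1}, {"n": 2}, {"n": 3}, {"n": 4}],
    "MATCH (n) RETURN n",
    results_num=3,
)
```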

__init__(model_name, connection_args, schema_config_or_info_dict, conversation_factory, use_reflexion)

Create a DatabaseAgent analogous to the VectorDatabaseAgentMilvus class, which can return results from a database using a query engine. Currently limited to Neo4j for development.

Parameters:

    connection_args (dict): A dictionary of arguments to connect to the database. Contains database name, URI, user, and password. Required.

    conversation_factory (Callable): A function to create a conversation for creating the KG query. Required.

    use_reflexion (bool): Whether to use the ReflexionAgent to generate the query. Required.
Source code in biochatter/database_agent.py
def __init__(
    self,
    model_name: str,
    connection_args: dict,
    schema_config_or_info_dict: dict,
    conversation_factory: Callable,
    use_reflexion: bool,
) -> None:
    """
    Create a DatabaseAgent analogous to the VectorDatabaseAgentMilvus class,
    which can return results from a database using a query engine. Currently
    limited to Neo4j for development.

    Args:
        connection_args (dict): A dictionary of arguments to connect to the
            database. Contains database name, URI, user, and password.

        conversation_factory (Callable): A function to create a conversation
            for creating the KG query.

        use_reflexion (bool): Whether to use the ReflexionAgent to generate
            the query.
    """
    self.conversation_factory = conversation_factory
    self.prompt_engine = BioCypherPromptEngine(
        model_name=model_name,
        schema_config_or_info_dict=schema_config_or_info_dict,
        conversation_factory=conversation_factory,
    )
    self.connection_args = connection_args
    self.driver = None
    self.use_reflexion = use_reflexion

connect()

Connect to the database and authenticate.

Source code in biochatter/database_agent.py
def connect(self) -> None:
    """
    Connect to the database and authenticate.
    """
    db_name = self.connection_args.get("db_name")
    uri = f"{self.connection_args.get('host')}:{self.connection_args.get('port')}"
    uri = uri if uri.startswith("bolt://") else "bolt://" + uri
    user = self.connection_args.get("user")
    password = self.connection_args.get("password")
    self.driver = nu.Driver(
        db_name=db_name or "neo4j",
        db_uri=uri,
        user=user,
        password=password,
    )
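
`connect` builds the bolt URI from the `host` and `port` connection arguments and prepends the `bolt://` scheme only when it is missing. A sketch of that normalisation (the helper name and example values are illustrative):

```python
def build_bolt_uri(connection_args: dict) -> str:
    # join host and port, then ensure the bolt:// scheme is present
    uri = f"{connection_args.get('host')}:{connection_args.get('port')}"
    return uri if uri.startswith("bolt://") else "bolt://" + uri

uri_plain = build_bolt_uri({"host": "localhost", "port": 7687})
uri_with_scheme = build_bolt_uri({"host": "bolt://localhost", "port": 7687})
```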

get_query_results(query, k=3)

Generate a query using the prompt engine and return the results. Replicates vector database similarity search API. Results are returned as a list of Document objects to align with the vector database agent.

Parameters:

    query (str): A query string. Required.

    k (int): The number of results to return. Default: 3.

Returns:

    list[Document]: A list of Document objects. The page content values are the literal dictionaries returned by the query; the metadata values are, for now, the Cypher query used to generate the results.

Source code in biochatter/database_agent.py
def get_query_results(self, query: str, k: int = 3) -> list[Document]:
    """
    Generate a query using the prompt engine and return the results.
    Replicates vector database similarity search API. Results are returned
    as a list of Document objects to align with the vector database agent.

    Args:
        query (str): A query string.

        k (int): The number of results to return.

    Returns:
        List[Document]: A list of Document objects. The page content values
            are the literal dictionaries returned by the query, the metadata
            values are the cypher query used to generate the results, for
            now.
    """
    (cypher_query, tool_result) = self._generate_query(
        query
    )  # self.prompt_engine.generate_query(query)
    # TODO some logic if it fails?
    if tool_result is not None:
        # _generate_query() already returned a tool result, so there is
        # no need to query the graph database again
        results = tool_result
    else:
        results = self.driver.query(query=cypher_query)

    # return first k results
    # returned nodes can have any formatting, and can also be empty or fewer
    # than k
    if results is None or len(results) == 0 or results[0] is None:
        return []
    return self._build_response(
        results=results[0], cypher_query=cypher_query, results_num=k
    )
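
The final guard treats a `None` result list, an empty list, or a list whose first element is `None` as "no results". A sketch of that check, assuming (as the code above does) that the driver returns a list whose first element holds the records:

```python
def first_result_set(results):
    # the driver is assumed to return a list whose first element holds
    # the records (based on the usage in get_query_results above)
    if results is None or len(results) == 0 or results[0] is None:
        return []
    return results[0]
```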