Named Entity Recognition to Enrich Text

October 20, 2023

Named entity recognition (NER) is a natural language processing task that identifies and classifies named entities (NE) into predefined semantic categories (such as persons, organizations, locations, events, time expressions, and quantities). By converting raw text into structured information, NER makes data more actionable, facilitating tasks such as information extraction, data aggregation, analytics, and social media monitoring.

This notebook demonstrates how to carry out NER with chat completion and function calling, enriching a text with links to a knowledge base such as Wikipedia.

Text

In Germany, in 1440, goldsmith Johannes Gutenberg invented the movable-type printing press. His work led to an information revolution and the unprecedented mass-spread of literature throughout Europe. Modelled on the design of the existing screw presses, a single Renaissance movable-type printing press could produce up to 3,600 pages per workday.

Text enriched with Wikipedia links

In [Germany](https://en.wikipedia.org/wiki/Germany), in 1440, goldsmith [Johannes Gutenberg](https://en.wikipedia.org/wiki/Johannes_Gutenberg) invented the [movable-type printing press](https://en.wikipedia.org/wiki/Movable_type). His work led to an information revolution and the unprecedented mass-spread of literature throughout [Europe](https://en.wikipedia.org/wiki/Europe). Modelled on the design of the existing screw presses, a single [Renaissance](https://en.wikipedia.org/wiki/Renaissance) movable-type printing press could produce up to 3,600 pages per workday.

Inference costs: this notebook also illustrates how to estimate OpenAI API costs.

%pip install --upgrade openai --quiet
%pip install --upgrade nlpia2-wikipedia --quiet
%pip install --upgrade tenacity --quiet
Note: you may need to restart the kernel to use updated packages.

This notebook works with the latest OpenAI models gpt-3.5-turbo-0613 and gpt-4-0613.

import json
import logging
import os

import openai
import wikipedia

from typing import Optional
from IPython.display import display, Markdown
from tenacity import retry, wait_random_exponential, stop_after_attempt

logging.basicConfig(level=logging.INFO, format=' %(asctime)s - %(levelname)s - %(message)s')

OPENAI_MODEL = 'gpt-3.5-turbo-0613'

client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "<your OpenAI API key if not set as env var>"))

We define a standard set of NER labels to showcase a wide range of use cases. However, for our specific task of enriching text with knowledge-base links, only a subset is actually needed.

labels = [
    "person",      # people, including fictional characters
    "fac",         # buildings, airports, highways, bridges
    "org",         # organizations, companies, agencies, institutions
    "gpe",         # geopolitical entities like countries, cities, states
    "loc",         # non-gpe locations
    "product",     # vehicles, foods, apparel, appliances, software, toys 
    "event",       # named sports, scientific milestones, historical events
    "work_of_art", # titles of books, songs, movies
    "law",         # named laws, acts, or legislations
    "language",    # any named language
    "date",        # absolute or relative dates or periods
    "time",        # time units smaller than a day
    "percent",     # percentage (e.g., "twenty percent", "18%")
    "money",       # monetary values, including unit
    "quantity",    # measurements, e.g., weight or distance
]

The Chat Completions API takes a list of messages as input and delivers a model-generated message as output. Although the chat format is designed primarily to facilitate multi-turn conversations, it is just as effective for single-turn tasks without any prior conversation. For our purposes, we will specify messages for the system, assistant, and user roles.

The system message (prompt) sets the assistant's behavior by defining its desired persona and task. We also describe the specific set of entity labels we aim to identify.

Although the model can be instructed to format its response, it is important to note that both gpt-3.5-turbo-0613 and gpt-4-0613 have been fine-tuned to discern when a function should be called, and to reply with JSON that conforms to the function's signature. This capability streamlines our prompt and allows us to receive structured data directly from the model.

def system_message(labels):
    return f"""
You are an expert in Natural Language Processing. Your task is to identify common Named Entities (NER) in a given text.
The possible common Named Entities (NER) types are exclusively: ({", ".join(labels)})."""

Assistant messages usually store previous assistant responses. In our scenario, however, they can also be crafted to provide an example of the desired behavior. While OpenAI models can perform zero-shot NER, we found that a one-shot approach produces more precise results.

def assisstant_message():
    return f"""
EXAMPLE:
    Text: 'In Germany, in 1440, goldsmith Johannes Gutenberg invented the movable-type printing press. His work led to an information revolution and the unprecedented mass-spread / 
    of literature throughout Europe. Modelled on the design of the existing screw presses, a single Renaissance movable-type printing press could produce up to 3,600 pages per workday.'
    {{
        "gpe": ["Germany", "Europe"],
        "date": ["1440"],
        "person": ["Johannes Gutenberg"],
        "product": ["movable-type printing press"],
        "event": ["Renaissance"],
        "quantity": ["3,600 pages"],
        "time": ["workday"]
    }}
--"""

The user message provides the specific text for the assistant's task:

def user_message(text):
    return f"""
TASK:
    Text: {text}
"""

In an OpenAI API call, we can describe functions to gpt-3.5-turbo-0613 and gpt-4-0613 and have the model intelligently choose to output a JSON object containing arguments to call those functions. It is important to note that the Chat Completions API does not actually execute the function; instead, it provides the JSON output, which can then be used to call the function in our code. For more details, refer to the OpenAI Function Calling Guide.
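This parse-and-dispatch step can be sketched without any API call. The payload below is a mocked stand-in for a tool call, and `enrich_entities_stub` is a purely illustrative stub, not the real function used later in this notebook:

```python
import json

# Mocked tool-call payload in the shape returned by the Chat Completions API:
# a function name plus a JSON-encoded arguments string (illustrative only).
mock_tool_call = {
    "name": "enrich_entities",
    "arguments": '{"person": ["Johannes Gutenberg"], "gpe": ["Germany"]}',
}

def enrich_entities_stub(label_entities: dict) -> str:
    # Stand-in for the real enrich_entities; just reports what it received.
    return f"received {sum(len(v) for v in label_entities.values())} entities"

available_functions = {"enrich_entities": enrich_entities_stub}

fn = available_functions[mock_tool_call["name"]]
args = json.loads(mock_tool_call["arguments"])  # the API returns arguments as a JSON string
print(fn(args))  # → received 2 entities
```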

Our function, enrich_entities(text, label_entities), takes a block of text and a dictionary of identified labels and entities as parameters. It then associates the recognized entities with links to their corresponding Wikipedia articles.

@retry(wait=wait_random_exponential(min=1, max=10), stop=stop_after_attempt(5))
def find_link(entity: str) -> Optional[str]:
    """
    Finds a Wikipedia link for a given entity.
    """
    try:
        titles = wikipedia.search(entity)
        if titles:
            # naively consider the first result as the best
            page = wikipedia.page(titles[0])
            return page.url
    except (wikipedia.exceptions.WikipediaException) as ex:
        logging.error(f'Error occurred while searching for Wikipedia link for entity {entity}: {str(ex)}')

    return None

def find_all_links(label_entities: dict) -> dict:
    """ 
    Finds all Wikipedia links for the dictionary entities in the whitelist label list.
    """
    whitelist = ['event', 'gpe', 'org', 'person', 'product', 'work_of_art']
    
    return {e: find_link(e) for label, entities in label_entities.items() 
                            for e in entities
                            if label in whitelist}

def enrich_entities(text: str, label_entities: dict) -> str:
    """
    Enriches text with knowledge base links.
    """
    entity_link_dict = find_all_links(label_entities)
    logging.info(f"entity_link_dict: {entity_link_dict}")
    
    for entity, link in entity_link_dict.items():
        text = text.replace(entity, f"[{entity}]({link})")

    return text
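The link-substitution step that enrich_entities performs can be seen in isolation with a hard-coded entity-to-URL mapping (no Wikipedia lookups; the helper name `link_markdown` is ours, for illustration only):

```python
def link_markdown(text: str, entity_links: dict) -> str:
    # Replace each entity mention with a Markdown link, as enrich_entities does.
    for entity, url in entity_links.items():
        text = text.replace(entity, f"[{entity}]({url})")
    return text

sample = "Johannes Gutenberg worked in Germany."
links = {
    "Johannes Gutenberg": "https://en.wikipedia.org/wiki/Johannes_Gutenberg",
    "Germany": "https://en.wikipedia.org/wiki/Germany",
}
print(link_markdown(sample, links))
# → [Johannes Gutenberg](https://en.wikipedia.org/wiki/Johannes_Gutenberg) worked in [Germany](https://en.wikipedia.org/wiki/Germany).
```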

As previously noted, gpt-3.5-turbo-0613 and gpt-4-0613 have been fine-tuned to detect when a function should be called. Moreover, they can produce a JSON response that conforms to the function signature. Here are the steps we follow:

  1. Define our function and its associated JSON schema.
  2. Call the model using the messages, tools, and tool_choice parameters.
  3. Convert the output into a JSON object, and then call the function with the arguments provided by the model.

In practice, one might want to call the model again by appending the function response as a new message, letting the model summarize the result back to the user. For our purposes, however, this step is not needed.
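For reference, that optional follow-up call conventionally appends the assistant message carrying the tool call, plus a tool-role message with the function output, before calling the model again. A minimal sketch, where "call_123" is a placeholder id (in a real exchange it comes from response_message.tool_calls[0].id) and no API call is made:

```python
def build_followup_messages(messages: list, assistant_message: dict,
                            tool_call_id: str, function_result: str) -> list:
    # The assistant message carrying the tool call must precede the tool result.
    return messages + [
        assistant_message,
        {"role": "tool", "tool_call_id": tool_call_id, "content": function_result},
    ]

followup = build_followup_messages(
    [{"role": "user", "content": "Enrich this text."}],
    {"role": "assistant", "tool_calls": [{"id": "call_123", "type": "function",
        "function": {"name": "enrich_entities", "arguments": "{}"}}]},
    tool_call_id="call_123",
    function_result="[Germany](https://en.wikipedia.org/wiki/Germany), in 1440 ...",
)
print(followup[-1]["role"])  # → tool
```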

Note that in a real-world scenario, it is strongly recommended to build in a user confirmation flow before taking actions.

Since we want the model to output a dictionary of labels and recognized entities:

{   
    "gpe": ["Germany", "Europe"],   
    "date": ["1440"],   
    "person": ["Johannes Gutenberg"],   
    "product": ["movable-type printing press"],   
    "event": ["Renaissance"],   
    "quantity": ["3,600 pages"],   
    "time": ["workday"]   
}   

we need to define the corresponding JSON schema to pass to the tools parameter:

def generate_functions(labels: list) -> list:
    return [
        {
            "type": "function",
            "function": {
                "name": "enrich_entities",
                "description": "Enrich Text with Knowledge Base Links",
                "parameters": {
                    "type": "object",
                    "properties": {
                        # one array-of-strings property per entity label
                        label: {
                            "type": "array",
                            "items": {"type": "string"}
                        }
                        for label in labels
                    },
                    "additionalProperties": False
                },
            }
        }
    ]
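As a sanity check, the shape of the parameters object can be built by a standalone sketch (`schema_for` is a hypothetical helper used only here, mirroring the one-array-of-strings-per-label idea):

```python
import json

def schema_for(labels: list) -> dict:
    # One array-of-strings property per entity label; no extra keys allowed.
    return {
        "type": "object",
        "properties": {
            label: {"type": "array", "items": {"type": "string"}}
            for label in labels
        },
        "additionalProperties": False,
    }

print(json.dumps(schema_for(["person", "gpe"]), indent=2))
```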

Now we call the model. Note that we instruct the API to use a specific function by setting the tool_choice parameter to {"type": "function", "function": {"name": "enrich_entities"}}.

@retry(wait=wait_random_exponential(min=1, max=10), stop=stop_after_attempt(5))
def run_openai_task(labels, text):
    messages = [
          {"role": "system", "content": system_message(labels=labels)},
          {"role": "assistant", "content": assisstant_message()},
          {"role": "user", "content": user_message(text=text)}
      ]

    response = client.chat.completions.create(
        model=OPENAI_MODEL,
        messages=messages,
        tools=generate_functions(labels),
        tool_choice={"type": "function", "function" : {"name": "enrich_entities"}}, 
        temperature=0,
        frequency_penalty=0,
        presence_penalty=0,
    )

    response_message = response.choices[0].message
    
    available_functions = {"enrich_entities": enrich_entities}  
    function_name = response_message.tool_calls[0].function.name
    
    function_to_call = available_functions[function_name]
    logging.info(f"function_to_call: {function_to_call}")

    function_args = json.loads(response_message.tool_calls[0].function.arguments)
    logging.info(f"function_args: {function_args}")

    function_response = function_to_call(text, function_args)

    return {"model_response": response, 
            "function_response": function_response}
text = """The Beatles were an English rock band formed in Liverpool in 1960, comprising John Lennon, Paul McCartney, George Harrison, and Ringo Starr."""
result = run_openai_task(labels, text)
 2023-10-20 18:05:51,729 - INFO - function_to_call: <function enrich_entities at 0x0000021D30C462A0>
 2023-10-20 18:05:51,730 - INFO - function_args: {'person': ['John Lennon', 'Paul McCartney', 'George Harrison', 'Ringo Starr'], 'org': ['The Beatles'], 'gpe': ['Liverpool'], 'date': ['1960']}
 2023-10-20 18:06:09,858 - INFO - entity_link_dict: {'John Lennon': 'https://en.wikipedia.org/wiki/John_Lennon', 'Paul McCartney': 'https://en.wikipedia.org/wiki/Paul_McCartney', 'George Harrison': 'https://en.wikipedia.org/wiki/George_Harrison', 'Ringo Starr': 'https://en.wikipedia.org/wiki/Ringo_Starr', 'The Beatles': 'https://en.wikipedia.org/wiki/The_Beatles', 'Liverpool': 'https://en.wikipedia.org/wiki/Liverpool'}
display(Markdown(f"""**Text:** {text}   
                     **Enriched_Text:** {result['function_response']}"""))
<IPython.core.display.Markdown object>

To estimate inference cost, we can parse the response's usage field. Detailed token costs per model are available in the OpenAI Pricing Guide.

# estimate inference cost assuming gpt-3.5-turbo (4K context)
i_tokens  = result["model_response"].usage.prompt_tokens 
o_tokens = result["model_response"].usage.completion_tokens 

i_cost = (i_tokens / 1000) * 0.0015
o_cost = (o_tokens / 1000) * 0.002

print(f"""Token Usage
    Prompt: {i_tokens} tokens
    Completion: {o_tokens} tokens
    Cost estimation: ${round(i_cost + o_cost, 5)}""")
Token Usage
    Prompt: 331 tokens
    Completion: 47 tokens
    Cost estimation: $0.00059
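The arithmetic above generalizes to a small helper. The default per-1K-token prices below are the gpt-3.5-turbo (4K context) rates assumed in this notebook; other models have different rates, so check the pricing page:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_price_per_1k: float = 0.0015,
                  output_price_per_1k: float = 0.002) -> float:
    # Cost = tokens / 1000 * price-per-1K, summed over prompt and completion.
    return round(prompt_tokens / 1000 * input_price_per_1k
                 + completion_tokens / 1000 * output_price_per_1k, 5)

print(estimate_cost(331, 47))  # → 0.00059, matching the run above
```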