
The common sense concept graph is a knowledge graph built around commonsense concepts and the relationships between them, constructed for specific business scenarios. This article introduces the schema of Meituan's common sense concept graph, the challenges met during its construction, the algorithm practice in the construction process, and finally some current applications of the concept graph in business.

I. Introduction

In natural language processing, we often think about how machines can understand natural language. When a human understands a piece of natural-language text, they usually combine the current information with related knowledge stored in their own brain, and only then understand the information. For example, given "He doesn't like to eat apples, but likes to eat ice cream", a person associates cognitive knowledge while understanding it: apples are sweet and a bit crisp; ice cream is sweet and cold and often eaten in summer; children prefer sweets and ice cream. Combining such knowledge, one can reasonably infer a number of reasons for preferring ice cream. Many current approaches to natural language understanding, however, still stay at the level of surface information: understanding works somewhat like Bayesian probability, finding the most probable interpretation from known training text.

In natural language processing, understanding text is the ultimate goal, so more and more research introduces extra knowledge to help machines understand natural-language text. Plain text is only an expression of external objective facts; knowledge is a summary of those facts built on top of the text. Adding auxiliary knowledge to natural language processing therefore lets machines understand natural language better.


Establishing a knowledge system is a direct way to make natural language understanding more accurate. The knowledge graph was proposed around this idea, with the expectation of letting machines reason and understand like a person. In 2012, Google formally proposed the concept of the knowledge graph; its original intent was to optimize the results returned by the search engine and enhance users' search quality and experience.

Figure 1 Information and knowledge

II. Introduction to the Common Sense Concept Graph

The common sense concept graph establishes relationships between concepts to help machines understand natural-language text. At the same time, our concept graph is oriented to Meituan scenarios, helping to improve search, recommendation, and feeds in Meituan's business.

According to the needs of these scenarios, understanding is mainly required along three dimensions:

What is it?

Understanding what a concept is means building an association system around core concepts. For example, in "washing machine repair", we need to know what "repair" is and what "washing machine" is.

What is it like?

The attributes of a core concept refine the concept along some aspect. In composite concepts such as "parent-child amusement park" and "thousand-layer cake", the modifiers describe attributes of the core concepts, so the association between a core concept and its corresponding attributes and attribute values needs to be established.

What to supply?

This dimension bridges the gap between the concepts users search for and the concepts of actual supply. Queries such as "reading" have no clearly corresponding supply concept, so an association network between search concepts and supply concepts is established to solve this type of problem.

To summarize, the graph covers "what is it", "what is it like", and "what to supply" for concepts. At the same time, Meituan-scenario instances such as POI (Point of Interest), SPU (Standard Product Unit), and deal lists need to be connected to the concepts in the graph.

Figure 2 Relationships in the common sense concept graph

From the construction targets, the overall common sense concept graph is decomposed into three types of nodes and four types of relationships, described in detail below.


2.1 Three types of graph nodes

Taxonomy node

: In the concept graph, understanding concepts requires a reasonable knowledge system, so a Taxonomy is predefined as the basis of understanding. Nodes in the predefined system are divided into two types: the first can serve as core categories in Meituan scenarios, for example ingredients, service items, and venues; the second are modifier categories, such as color, method, and style. Defining these two types of nodes helps search, recommendation, and other applications. The predefined Taxonomy nodes are shown in the following figure:

Figure 3 Predefined Taxonomy system of the graph

Atomic concept node

: The minimal semantic unit nodes that make up the graph, having the smallest granularity with independent semantics, such as "internet-famous", "dog cafe", "facial toner". Every defined atomic concept must be attached to a defined Taxonomy node.

Composite concept node

: A concept node formed by combining an atomic concept with corresponding attributes, such as "facial hydration". A composite concept needs to establish a hypernym relationship with the concept of its core word.

2.2 Four types of relationships

Synonym / hypernym relationships

: Semantic synonym and hypernym-hyponym relationships, e.g. facial hydration -Syn- facial moisturizing. The defined Taxonomy system is itself a hypernym hierarchy, so it is merged into the synonym / hypernym relationships.

Figure 4 Examples of hypernym and synonym relationships

Concept attribute relationship

: A typical CPV (Concept-Property-Value) relationship that describes and defines a concept through individual attributes, such as hot pot - taste - not spicy, hot pot - specification - single serving. Examples:

Figure 5 Concept Attribute Relationship Example

Concept attribute relationships contain two categories.

Predefined concept properties: the currently predefined typical concept properties are as follows:

Figure 6 Predefined attributes

Open concept properties: In addition to the predefined common concept properties, we also mine concept-specific properties from text as a supplement, for example posture, theme, comfort, and reputation.


Concept fulfillment relationship

This kind of relationship mainly establishes the link between the concepts users search for and Meituan's supply, such as outing - venue - botanical garden, stress relief - project - boxing.

The concept fulfillment relationship takes the "event" (the user's demand) as the core and defines fulfillment types such as "venue", "item", "crowd", "time", and "effect" to satisfy user needs. Taking the event "whitening" as an example: as a user need, "whitening" can be fulfilled by different supply concepts, such as beauty salons and skin-care injections. The currently defined fulfillment types are shown in the following figure:

Figure 7 Concept fulfillment types

POI / SPU - concept relationship

: Taking POI in Meituan scenarios as an example, the instance-concept relationship is the last mile of the knowledge graph and is often where the business value of a knowledge graph is measured. In search, recommendation, and other applications, the ultimate goal is to show the POIs that meet user needs, so establishing the POI / SPU - concept relationship is an important part of the whole Meituan scenario, and this is also among the most valuable data.

III. Common Sense Concept Graph Construction

The overall framework of graph construction is shown in the following figure:

Figure 8 Overall workflow of concept graph construction

3.1 Concept mining

The various relationships of the common sense concept graph are all built around concepts, so mining the concepts themselves is the first step of construction. For the two types of concepts, atomic and composite, corresponding methods are adopted separately.

3.1.1 Atomic Concept Mining

Atomic concept candidates come from the minimal fragments of queries, UGC (user-generated content), deal names, and other text. The judgment criterion for an atomic concept is that it must satisfy three characteristics: popularity, meaningfulness, and completeness.


Popularity

A concept should be a term that is popular in one or more scenarios, measured mainly by frequency features. For example, "desktop kill" has a very low search volume and a very low frequency in the UGC corpus, so it does not meet the popularity requirement.

Meaningfulness

A concept should be a meaningful term, measured mainly by semantic features. For example, "Ax" and "Pog" usually only denote a simple name and carry no other actual meaning.

Completeness

A concept should be a complete term, measured mainly by the independent-retrieval ratio (the term's search volume as a standalone query divided by the total search volume of queries containing the term). For example, "children" is a wrong candidate: its frequency in UGC is high, but its independent-retrieval ratio is low.

Based on these characteristics, a model is trained on manually labeled data and automatically rule-constructed data to judge whether an atomic concept candidate is reasonable.
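As a hedged illustration of the completeness feature above, the independent-retrieval ratio can be sketched over a toy query log (the `query_counts` data, function names, and thresholds below are hypothetical, not Meituan's actual pipeline):

```python
def independence_ratio(term, query_counts):
    """Completeness proxy: the term's standalone search volume divided
    by the total volume of queries containing the term."""
    alone = query_counts.get(term, 0)
    containing = sum(c for q, c in query_counts.items() if term in q)
    return alone / containing if containing else 0.0

# Illustrative query log: query string -> search volume.
counts = {"latte": 800, "iced latte": 200, "kids": 10, "kids playground": 490}
# "latte" is often searched alone -> high ratio, a good complete candidate.
# "kids" appears mostly inside longer queries -> low ratio, a bad candidate.
```

A candidate would then pass the completeness check only when this ratio clears a tuned threshold, alongside the frequency-based popularity check.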

3.1.2 Composite Concept Mining

Composite concept candidates come from combinations of atomic concepts. Because of the combinatorial explosion, judging composite concepts is more complicated than judging atomic concepts: a composite concept must have a certain degree of recognition on the Meituan platform while keeping complete semantics. Given this, a Wide & Deep model structure is used, where the Deep side is responsible for semantic judgment and the Wide side introduces in-platform statistical information.

Figure 9 Wide & Deep model for composite concept mining

The model structure has the following two characteristics, enabling a more accurate judgment of the plausibility of composite concepts:

Wide & Deep model structure

: Combines discrete features with a deep model to judge whether a composite concept is reasonable.

Graph Embedding features

: Introduce collocation information between phrases; for example, "food" can collocate with "crowd", "cooking method", and "quality".
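To make the combination concrete, here is a minimal sketch of the Wide & Deep idea under stated assumptions: a linear term over discrete in-platform features plus a tiny ReLU MLP over a semantic embedding, merged through a sigmoid. All weights, shapes, and names are illustrative, not the production model:

```python
import math

def wide_and_deep_score(wide_feats, wide_w, deep_emb, deep_w1, deep_w2):
    """Toy Wide & Deep combiner for composite-concept plausibility.

    wide_feats: discrete/statistical features (e.g. in-platform frequency).
    deep_emb:   semantic embedding of the candidate phrase.
    """
    # Wide side: plain linear model over discrete features.
    wide = sum(w * x for w, x in zip(wide_w, wide_feats))
    # Deep side: one hidden ReLU layer over the embedding.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, deep_emb)))
              for row in deep_w1]
    deep = sum(w * h for w, h in zip(deep_w2, hidden))
    # Joint sigmoid output: probability the composite concept is reasonable.
    return 1.0 / (1.0 + math.exp(-(wide + deep)))
```

In the real model the two sides are trained jointly; this sketch only shows how their logits are summed before the sigmoid.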

3.2 Concept hypernym relationship mining

After obtaining the concepts, we still need to understand what a concept "is": on the one hand through the manually defined hypernym hierarchy of the Taxonomy knowledge system, and on the other hand through hypernym relationships between the concepts themselves.

3.2.1 Concept - Taxonomy hypernym relationships

The hypernym relationship between a concept and the Taxonomy answers what a concept is within the manually defined knowledge system. Since the Taxonomy types are manually defined, the problem can be transformed into a classification problem. At the same time, a concept may have multiple types in the Taxonomy system: for example, "green fish" is both an "animal" and an "ingredient". So we finally treat this problem as an Entity Typing task. The concept and its corresponding context are taken as model input, and the concept and the different Taxonomy categories are placed in the same representation space; the specific model structure is shown below:

Figure 10 BERT Taxonomy relationship model
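The multi-label nature of the typing task can be sketched as follows (vectors here are illustrative stand-ins for the BERT-based representations; `predict_types` and the threshold are hypothetical):

```python
import math

def predict_types(concept_vec, type_vecs, threshold=0.5):
    """Multi-label Entity Typing sketch: every Taxonomy type whose
    sigmoid dot-product score with the concept clears the threshold is
    kept, so one concept can be both 'animal' and 'ingredient'."""
    def score(type_vec):
        dot = sum(a * b for a, b in zip(concept_vec, type_vec))
        return 1.0 / (1.0 + math.exp(-dot))
    return {t for t, v in type_vecs.items() if score(v) >= threshold}
```

This is why the task is not a softmax over mutually exclusive classes: each type gets an independent yes/no decision in the shared space.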

3.2.2 Concept - concept hypernym relationships

The knowledge system explains what a concept is in terms of manually defined types, but manually defined types are always limited; if a hypernym is not among them, the concept cannot be understood this way. For example, we can learn that "Western musical instrument" and "erhu" are both "items", but we cannot obtain the hypernym relationships between "Western musical instrument" and "musical instrument", or between "erhu" and "musical instrument". For this problem, the following two methods are currently adopted.

Method based on lexical rules

: Mainly solves hypernym relationships between atomic concepts and composite concepts, mining hypernym pairs from candidates in which one concept lexically contains the other (e.g. "Western musical instrument" contains "musical instrument").
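The lexical rule can be sketched in a few lines (function name and suffix-matching heuristic are illustrative; real systems add word segmentation and filtering):

```python
def lexical_hypernym_candidates(concepts):
    """Lexical-rule sketch: when one known concept's surface form ends
    with another known concept (e.g. 'western musical instrument' ends
    with 'musical instrument'), emit a (hyponym, hypernym) candidate."""
    pairs = set()
    for child in concepts:
        for parent in concepts:
            if child != parent and child.endswith(parent):
                pairs.add((child, parent))
    return pairs
```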

Method based on context judgment

: Lexical rules can only judge hypernym relationships with lexical containment. For hypernym pairs without lexical containment, such as "erhu - musical instrument", we first extract relationship candidates and then classify whether each candidate is a reasonable hypernym relationship. Considering that people usually mention an object's type when explaining it (an explanation of "erhu" will mention that "the erhu is a traditional musical instrument"), such explanatory text both yields candidate pairs like "erhu - musical instrument" and supports judging their plausibility. The process is therefore split into candidate relationship description extraction and hypernym relationship classification.

Candidate relationship description extraction

: Two concepts belonging to the same Taxonomy type is a necessary condition for a hypernym relationship; for example, "erhu" and "musical instrument" are both "items" defined in the Taxonomy system. Based on the concept - Taxonomy results, for each concept whose hypernyms are to be mined, candidate concepts with a consistent Taxonomy type are found, and then sentences in text that describe both concepts are screened as candidate relationship descriptions for the hypernym classifier.

Hypernym relationship classification

After obtaining the candidate relationship descriptions, the context is used to judge whether the hypernym relationship is reasonable. The start and end positions of the two concepts are marked in the text, and the vectors of the two start-position markers in the BERT output are concatenated as the relationship representation, which is then classified. The detailed model structure is shown below:

Figure 11 BERT hypernym relationship model

For training data construction: because a large number of sentences that contain both concepts of a known hypernym pair do not actually express the hypernym relationship, constructing training data by distant supervision over existing hypernym pairs is not feasible, so the model is trained directly on a manually labeled training set. Since the amount of manual labeling is limited, the data are augmented with the semi-supervised learning algorithm UDA (Unsupervised Data Augmentation); the final precision reaches 90%+. Detailed metrics are shown in Table 1:

Table 1 Effect of UDA under different amounts of training data

3.3 Concept Attribute Relationship Mining

Depending on whether a property is universal, concept properties can be divided into common properties and open properties. Common properties are manually defined and contained by most concepts, such as price, style, and quality. Open properties are attributes specific to certain concepts: for example, "hair transplant", "eyelash extension", and "script kill" contain the open attributes "density", "curl", and "logic" respectively. The number of open properties far exceeds that of common properties. For these two kinds of attributes, we use the following two mining approaches.

3.3.1 Mining common property relations based on composite concepts

Due to the universality of common properties, the Value in a common property (CPV) is usually combined with concepts in the form of composite concepts, such as "budget shopping mall", "Japanese-style", "HD red movie". We transform the relation mining task into dependency analysis plus fine-grained NER (see "Exploration and practice of NER technology in Meituan search"): dependency analysis identifies the core entity and the modifying components in the composite concept, and fine-grained NER determines the specific attribute value. For example, given the composite concept "HD red movie", dependency analysis identifies "movie" as the core concept with "red" and "HD" as its attributes, and fine-grained NER predicts the attribute values "style (red)" and "quality (HD)".

Dependency analysis and fine-grained NER carry mutually useful information: for example, in "graduation doll", the entity types "time (graduation)" and "product (doll)" and the dependency information that "doll" is the core word can promote each other, so the two tasks are learned jointly. However, because the degree of correlation between the two tasks is unclear, joint learning introduces considerable noise; using Meta-LSTM, the feature-level joint learning is upgraded to function-level, and hard parameter sharing becomes dynamic sharing, lowering the noise between the two tasks.


The overall architecture of the model is as follows:

Figure 12 Dependency analysis - fine-grained NER joint learning model

At present, the overall accuracy of concept modification relationships is around 85%.

3.3.2 Mining concept-specific relations based on open attributes

Mining of open attribute words and attribute values

Open attribute relations require mining the attributes and attribute values unique to different concepts, and the difficulty lies in recognizing the open attribute words and open attribute values. By observing the data, we found that some universal attribute values (e.g. good, bad, high, low, many, few) usually collocate with attribute words (e.g. high water temperature, heavy foot traffic). So we take a template-based bootstrapping method to automatically mine attribute words and attribute values from user comments; the mining process is as follows:

Figure 13 Open attribute mining process
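A minimal sketch of the bootstrapping loop, under the simplifying assumption of whitespace-tokenized text and a "<value> <attribute>" template (real comments need Chinese segmentation; names and data are illustrative):

```python
def bootstrap_attributes(comments, seed_values, rounds=2):
    """Template-based bootstrapping sketch: each round harvests attribute
    words that follow a known value word, then value words that precede a
    known attribute word, growing both sets from a small seed."""
    values, attrs = set(seed_values), set()
    for _ in range(rounds):
        for comment in comments:
            toks = comment.split()
            for i in range(len(toks) - 1):
                if toks[i] in values:        # "<value> X" => X is an attribute
                    attrs.add(toks[i + 1])
                if toks[i + 1] in attrs:     # "Y <attribute>" => Y is a value
                    values.add(toks[i])
    return attrs, values
```

Starting from one seed value ("high"), the loop can chain through shared attributes to discover new values ("great") and new attributes ("logic").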

After mining the open attribute words and attribute values, open attribute relationship mining splits into "concept-attribute" pair mining and "concept-attribute-value" triple mining.

Mining of concept-attribute pairs

"Concept-attribute" pair mining determines whether a concept Concept contains a property Property. The mining steps are as follows:

According to the co-occurrence features of concepts and attributes in UGC, a TF-IDF variant algorithm mines the typical attributes of each concept as candidates.

Each candidate concept-attribute pair is composed into a simple natural-language expression, a pre-trained language model judges the fluency of the sentence, and high-fluency concept-attribute pairs are retained.
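The first step can be sketched with a plain TF-IDF scoring over per-concept UGC (the real system uses a TF-IDF variant plus the language-model fluency filter; data layout and names below are illustrative):

```python
import math
from collections import Counter

def typical_attributes(ugc_by_concept, top_k=1):
    """TF-IDF-style sketch for candidate 'concept - attribute' pairs: a
    word is a typical attribute of a concept when it is frequent in that
    concept's UGC but rare across other concepts' UGC."""
    doc_freq = Counter()
    term_freq = {}
    for concept, texts in ugc_by_concept.items():
        counts = Counter(w for t in texts for w in t.split())
        term_freq[concept] = counts
        doc_freq.update(counts.keys())
    n = len(ugc_by_concept)
    return {
        concept: [w for w, _ in sorted(
            ((w, c * math.log(n / doc_freq[w])) for w, c in counts.items()),
            key=lambda kv: -kv[1])[:top_k]]
        for concept, counts in term_freq.items()
    }
```

Words appearing under every concept get an IDF of zero and drop out, which is exactly the co-occurrence intuition the step relies on.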

Mining of concept-attribute-value triples

After obtaining the "concept-attribute" pairs, the steps to mine the corresponding attribute values are as follows:

Seed mining

. Mine seed triples from UGC based on co-occurrence features and the language model.

Template mining

. Use the seed triples to mine suitable templates from the corpus (for example, a pattern such as "the water temperature of this swimming pool is ...").

Relationship generation

. Fill the mined templates with candidate triples and use the trained masked language model to generate and validate new relationships.

Figure 14 Concept Attribute Relationship Model
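The fill-and-score generation step can be sketched as follows; `scorer` is a placeholder for the trained masked language model, and the template format is an assumption for illustration:

```python
def generate_triples(concepts, attr, values, template, scorer, threshold=0.5):
    """Relation-generation sketch: fill the mined template with every
    (concept, value) pair and keep the triples whose filled sentence the
    fluency scorer accepts. Any callable returning a 0-1 score works."""
    kept = []
    for concept in concepts:
        for value in values:
            sentence = template.format(concept=concept, attr=attr, value=value)
            if scorer(sentence) >= threshold:
                kept.append((concept, attr, value))
    return kept
```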

At present, the accuracy of open-domain concept attribute relationships is around 80%.


3.4 Concept fulfillment relationship mining

The concept fulfillment relationship is the association between the concepts users search for and Meituan's supply concepts. For example, when a user searches "walking", the real intention is to find "a good place to go for a walk", which the platform fulfills with supply concepts such as "country park" and "botanical garden". This relationship has to be mined from zero to one, so different mining algorithms were designed for different phases, which can be divided into three: (1) early-stage seed mining; (2) middle-stage mining with a deep discrimination model; (3) late-stage relationship completion. The details are described below.

3.4.1 Mining seed data based on co-occurrence features

To solve the cold-start problem in relation extraction tasks, the industry usually adopts bootstrapping: manually set a small number of seeds and templates, then automatically expand the data from the corpus. However, bootstrapping is not only limited by template quality but also has natural defects in the Meituan scenario. The main corpus source is user comments, which are colloquial and diverse, making it difficult to design universal and effective templates. Therefore, we abandon the template-based approach and instead, based on the co-occurrence characteristics between entities, construct a triplet contrastive learning network to automatically mine the latent association information in unstructured text.


Specifically, we observed that the distribution of entities differs considerably between merchants of different categories. For example, UGC under food merchants often involves "dinner", "a la carte", "restaurant"; UGC under fitness merchants often involves "weight loss", "personal trainer", "gym"; while general entities such as "decoration" and "hall" appear under every category. Therefore, we construct a triplet contrastive learning network that pulls user comments from the same category close and pushes comments from different categories apart. Similar to the way Word2Vec derives word vectors, the representations obtained with this contrastive learning strategy naturally contain rich relational information. At prediction time, for any user search concept, we compute the semantic similarity with all fulfillment concepts, supplemented by the statistical features of the supply, and obtain a batch of high-quality seed data.

Figure 15 Concept triplet network
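The training objective of such a triplet network can be sketched as the standard triplet margin loss (vectors and margin here are illustrative; the real network learns the embeddings end to end):

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet-loss sketch: comments from the same category (anchor,
    positive) are pulled together, and comments from another category
    (negative) are pushed at least `margin` further away."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```

When the negative is already far enough away, the loss is zero and the triplet stops contributing gradients, which is what makes hard-negative selection matter in practice.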

3.4.2 Deep model trained on seed data

Pre-trained language models have made great progress in the NLP field over the past two years, and fine-tuning downstream tasks on a large-scale pre-trained model has become common practice in NLP. Therefore, in the middle stage of relationship mining we use a BERT-based relation discrimination model (see "Exploration and practice of BERT at Meituan"), using the rich linguistic knowledge learned by BERT during pre-training to help the relation extraction task.

The model structure is shown below. First, candidate entity pairs are obtained from the seed data, and user comments containing the candidate entities are recalled. Then, following the entity-marking method in the MTB paper, special marker symbols are inserted at the start and end positions of the two entities. After BERT encoding, the vectors of the two entities' special start markers are concatenated as the relation representation. Finally, the relation representation is fed into a softmax layer to determine whether the entity pair holds the relationship.


Figure 16 Concept fulfillment relationship discrimination model
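The entity-marking preprocessing step described above can be sketched in a few lines (the marker strings `[E1]`/`[E2]` are illustrative stand-ins for the special tokens added to the vocabulary):

```python
def insert_entity_markers(text, head, tail):
    """MTB-style marking sketch: wrap the two entity mentions in special
    tokens so the encoder's start-marker vectors can later be spliced
    into a relation representation."""
    text = text.replace(head, f"[E1] {head} [/E1]", 1)
    text = text.replace(tail, f"[E2] {tail} [/E2]", 1)
    return text
```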

3.4.3 Relationship completion based on the existing graph structure

Through the above two stages, a concept fulfillment graph of considerable scale has been constructed from unstructured text. However, due to the limitations of the semantic models, a large number of triples are still missing from the current graph. To further enrich the relational information of the concept graph, we apply the TransE algorithm from knowledge graph link prediction, together with graph neural network techniques, to complete the existing concept graph.

To make full use of the structural information of the known graph, we use a relational graph attention network (RGAT, Relational Graph Attention Network) to model the graph structure. RGAT's relational attention mechanism overcomes the defect that traditional GCN and GAT cannot model edge types, making it more suitable for heterogeneous networks such as concept graphs. After obtaining dense entity embeddings with RGAT, we use TransE as the loss function. TransE treats r in the triple (h, r, t) as a translation vector from h to t and constrains h + r ≈ t. This method is widely used in knowledge graph completion tasks and shows good robustness and scalability.

The specific details are shown in the figure below. In RGAT, the features of each layer's nodes are aggregated from the features of neighboring nodes and edges, with different weight coefficients assigned to different nodes and edges by the relational attention mechanism. After obtaining the node and edge features of the last layer, we use TransE as the training objective: for each triple (h, r, t), minimize ||h + r - t||. At prediction time, for each head entity and each relation, all nodes of the graph are scored as candidate tail entities, and the best-scoring one is returned as the final tail entity.

Figure 17 Concept fulfillment relationship completion
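The TransE scoring and tail prediction described above can be sketched as follows (embeddings are illustrative stand-ins for the RGAT outputs):

```python
def transe_distance(h, r, t):
    """TransE sketch: the plausibility of (h, r, t) is measured by the
    L2 distance ||h + r - t||; a smaller distance means a more plausible
    triple."""
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

def predict_tail(h, r, entity_vecs):
    """Link-prediction step: score every node as a candidate tail entity
    and return the closest one."""
    return min(entity_vecs, key=lambda e: transe_distance(h, r, entity_vecs[e]))
```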

At present, the overall accuracy of concept fulfillment relationships is about 90%.

3.5 POI / SPU - concept relationship construction

Establishing the association between graph concepts and Meituan instances uses information from multiple dimensions, such as the POI / SPU name, category, and user comments. The key to building the association is extracting the information relevant to a concept from this diversified information. Therefore, we recall all clauses semantically related to the concept from an instance's text, then use a discrimination model to judge the degree of correlation between the concept and each clause. The specific process is as follows:

Synonym expansion

. For the concept to be tagged, obtain the concept's synonym cluster according to the synonym data.

Candidate clause recall

. Based on the synonym cluster, recall candidate clauses from the merchant name, deal name, user comments, and other fields.

Discrimination model

. Use the concept-text association discrimination model (shown in the figure below) to judge whether the concept and a clause match.

Figure 18 Concept tagging discrimination model

Tagging result. Adjust the threshold to obtain the final discrimination result.
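The four steps above can be sketched as a small pipeline (names are hypothetical; `match_score` stands in for the concept-text discrimination model, and substring matching stands in for semantic recall):

```python
def tag_pois(concept, synonyms, clauses_by_poi, match_score, threshold=0.8):
    """Tagging-pipeline sketch: expand the concept with its synonym
    cluster, recall clauses mentioning any cluster member, score them
    with the discrimination model, and keep POIs clearing the threshold."""
    cluster = {concept} | set(synonyms)
    tagged = []
    for poi, clauses in clauses_by_poi.items():
        recalled = [c for c in clauses if any(s in c for s in cluster)]
        if recalled and max(match_score(concept, c) for c in recalled) >= threshold:
            tagged.append(poi)
    return tagged
```

Raising or lowering `threshold` is the "tagging result" step: it trades recall for precision in the final POI-concept relations.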

IV. Application Practice

4.1 Domain term graph construction


Meituan's to-business side covers a wide range of domains, including parent-child, education, medical beauty, and leisure entertainment, and each domain contains many smaller sub-domains, so knowledge graphs built for different domains can assist their search recall, filtering, and recommendation.

Besides common sense concept data, the common sense concept graph also contains Meituan scenario data and accumulated base algorithm capabilities, so it can help build domain term graph data.

With the common sense graph supplementing the missing domain term data, a reasonable domain term graph is built to help upgrade search recall and POI tagging. In the education domain, the graph scale has grown from the initial 1,000+ nodes to 2,000+, and the synonyms from the low thousands to 20,000+, achieving good results.

The domain term graph construction process is shown below:

Figure 19 Domain term graph construction process

4.2 Dianping search suggestions

SUG recommendation in Dianping guides user cognition while helping users shorten the time to complete a search and improving search efficiency. SUG recommendation therefore focuses on two goals: (1) enrich the user's cognition, extending Dianping's POI- and category-based search with natural-text search; (2) refine the user's search needs: when the user searches a broad category word, help the user refine the need.


The rich concepts and the relationships with their corresponding attributes and attribute values in the common sense concept graph can be used to generate refined queries from a relatively broad query. For example, from "cake" one can produce "strawberry cake" and "cheesecake" by flavor, or "6-inch cake" and "pocket cake" by size.

An example of search guidance query production is shown below:

Figure 20 Recommended query mining example
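The generation step above can be sketched directly from the CPV data (the `{concept: {attribute: [values]}}` layout and function name are illustrative):

```python
def generate_sug_queries(concept, cpv):
    """SUG-generation sketch: combine a broad query with each attribute
    value from the concept's CPV relations to produce refined
    suggestions such as 'strawberry cake' or '6-inch cake'."""
    return [f"{value} {concept}"
            for values in cpv.get(concept, {}).values()
            for value in values]
```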

4.3 Medical beauty content tagging

On medical beauty content display pages, users are usually only interested in content about a particular medical beauty service, so different service labels are provided to help users filter the medical beauty content they want and to reach user needs accurately. However, associating labels with medical beauty content used to produce many errors, and users would see content that did not match their needs after filtering. Improving tagging accuracy helps users focus on the content that matches their needs.

With the concept graph's concept - POI tagging capability and concept - UGC tagging relationships, the accuracy of the label - content association is improved. With the graph's capabilities, both accuracy and recall increase significantly.

Accuracy rate

: increased from 51% to 91% compared with the keyword-matching algorithm.

Recall rate

: increased from 77% to 91% through the graph's synonym capability.

Figure 21 Example of medical beauty content tagging effect

V. Summary and Outlook

We have given a detailed introduction to the construction of the common sense concept graph and its applications in Meituan scenarios. For the whole graph, we defined three types of nodes and four types of relationships according to business needs, and introduced the concept mining algorithm and the mining algorithms for the different relationship types.

At present, our common sense concept graph has 2 million+ concepts and 3 million+ relationships between concepts, including hypernym, synonym, attribute, and fulfillment relationships (POI - concept relationships not included). The overall accuracy of the relationships is around 90%; we are continuously optimizing the algorithms, expanding the relationships, and improving accuracy. Our common sense concept graph will continue to be refined.


[1] Onoe Y, Durrett G. Interpretable entity representations through large-scale typing [J]. arXiv preprint arXiv:2005.00147, 2020.

[2] Bosselut A, Rashkin H, Sap M, et al. COMET: commonsense transformers for automatic knowledge graph construction [J]. arXiv preprint arXiv:1906.05317, 2019.

[3] Soares L B, FitzGerald N, Ling J, et al. Matching the blanks: distributional similarity for relation learning [J]. arXiv preprint arXiv:1906.03158, 2019.

[4] Peng H, Gao T, Han X, et al. Learning from context or names? An empirical study on neural relation extraction [J]. arXiv preprint arXiv:2010.01923, 2020.

[5] Jiang Z, et al. How can we know what language models know? [J]. Transactions of the Association for Computational Linguistics, 2020, 8: 423-438.

[6] Li X L, Liang P. Prefix-tuning: optimizing continuous prompts for generation [J]. arXiv preprint arXiv:2101.00190, 2021.

[7] Malaviya C, et al. Commonsense knowledge base completion with structural and semantic context [C] // Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(3).

[8] Li Hanzhen, Qi Li, Zhou Pengfei. Intelligence Science, 2017, 35(1): 51-55.

[9] Yan Bo, Zhang Ye, Su Hongyi, et al. A commodity attribute clustering method based on user comments.

[10] Wang C, He X, Zhou A. Open relation extraction for Chinese noun phrases [J]. IEEE Transactions on Knowledge and Data Engineering, 2019.

[11] Li F L, et al. AliMeKG: domain knowledge graph construction and application in e-commerce [C] // Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2020.

[12] Yang Y, et al. Distantly supervised NER with partial annotation learning and reinforcement learning [C] // Proceedings of the 27th International Conference on Computational Linguistics. 2018.

[13] Luo X, Liu L, Yang Y, et al. AliCoCo: Alibaba e-commerce cognitive concept net [C] // Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data. 2020: 313-327.

[14] Devlin J, Chang M W, Lee K, et al. BERT: pre-training of deep bidirectional transformers for language understanding [J]. arXiv preprint arXiv:1810.04805, 2018.

[15] Cheng H T, Koc L, Harmsen J, et al. Wide & Deep learning for recommender systems [C] // Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. 2016: 7-10.

[16] Liu J, Shang J, Wang C, et al. Mining quality phrases from massive text corpora [C] // Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data. 2015: 1729-1744.

[17] Shen J, Wu Z, Lei D, et al. HiExpan: task-guided taxonomy construction by hierarchical tree expansion [C] // Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018: 2180-2189.

[18] Huang J, Xie Y, Meng Y, et al. CoRel: seed-guided topical taxonomy construction by concept learning and relation transferring [C] // Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020: 1928-1936.

[19] Liu B, Guo W, Niu D, et al. A user-centered concept mining system for query and document understanding at Tencent [C] // Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019: 1831-1841.

[20] Choi E, Levy O, Choi Y, et al. Ultra-fine entity typing [J]. arXiv preprint arXiv:1807.04905, 2018.

[21] Xie Q, Dai Z, Hovy E, et al. Unsupervised data augmentation for consistency training [J]. arXiv preprint arXiv:1904.12848, 2019.

[22] Mao X, Wang W, Xu H, et al. Relational reflection entity alignment [C] // Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2020: 1095-1104.

[23] Chen J, Qiu X, Liu P, et al. Meta multi-task learning for sequence modeling [C] // Proceedings of the AAAI Conference on Artificial Intelligence. 2018, 32(1).

About the Author

Zongyu, Junjie, Huimin, Fubao, Xu Jun, Xie Rui, Wuwei, and others, all from the NLP Center of the Meituan Search and NLP Department.

Job Offers

The Meituan Search and NLP Department / NLP Center is the core team responsible for Meituan's natural language processing technology. Its mission is to build world-class natural language processing core technology and service capabilities, relying on NLP (Natural Language Processing), Deep Learning, and Knowledge Graph technologies to process massive text data and provide intelligent text semantic understanding services for Meituan's businesses.

The NLP Center has long-term openings for natural language processing algorithm experts and machine learning algorithm engineers. Interested students can send resumes to

This article was produced by the Meituan technical team, and the copyright belongs to Meituan. Sharing and communication for non-commercial purposes are welcome; please credit the content as "reprinted from the Meituan technical team". This article may not be reprinted or used for commercial purposes without permission. To apply for authorization, please send an email to